Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-2890 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-qWQOQHNVAGkV/agent.2116
SSH_AGENT_PID=2118
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14518375709131756146.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14518375709131756146.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision fbfc234895c48282e2e92b44c8c8b49745e81745 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f fbfc234895c48282e2e92b44c8c8b49745e81745 # timeout=30
Commit message: "Improve CSIT helm charts"
 > git rev-list --no-walk fbfc234895c48282e2e92b44c8c8b49745e81745 # timeout=10
provisioning
config files... copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins757468791448122509.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-NYvV lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-NYvV/bin to PATH Generating Requirements File Python 3.10.6 pip 24.0 from /tmp/venv-NYvV/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.2.2 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.35 botocore==1.34.35 bs4==0.0.2 cachetools==5.3.2 certifi==2024.2.2 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.5.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==2.5.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.5.0 docker==4.2.2 dogpile.cache==1.3.0 email-validator==2.1.0.post1 filelock==3.13.1 future==0.18.3 gitdb==4.0.11 GitPython==3.1.41 google-auth==2.27.0 httplib2==0.22.0 identify==2.5.33 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.5.0 kubernetes==29.0.0 lftools==0.37.8 lxml==5.1.0 MarkupSafe==2.1.5 msgpack==1.0.7 multi_key_dict==2.0.3 munch==4.0.0 netaddr==0.10.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==0.62.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.0 oslo.config==9.3.0 oslo.context==5.3.0 oslo.i18n==6.2.0 oslo.log==5.4.0 oslo.serialization==5.3.0 oslo.utils==7.0.0 packaging==23.2 pbr==6.0.0 
platformdirs==4.2.0 prettytable==3.9.0 pyasn1==0.5.1 pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.2.0 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.4.0 python-dateutil==2.8.2 python-heatclient==3.4.0 python-jenkins==1.8.2 python-keystoneclient==5.3.0 python-magnumclient==4.3.0 python-novaclient==18.4.0 python-openstackclient==6.0.0 python-swiftclient==4.4.0 pytz==2024.1 PyYAML==6.0.1 referencing==0.33.0 requests==2.31.0 requests-oauthlib==1.3.1 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.17.1 rsa==4.9 ruamel.yaml==0.18.5 ruamel.yaml.clib==0.2.8 s3transfer==0.10.0 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.1.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.3 tqdm==4.66.1 typing_extensions==4.9.0 tzdata==2023.4 urllib3==1.26.18 virtualenv==20.25.0 wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.2.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins10399999204259747661.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins12334347158713918464.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
+ set +u
+ save_set
+ RUN_CSIT_SAVE_SET=ehxB
+ RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
+ '[' 1 -eq 0 ']'
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ export ROBOT_VARIABLES=
+
ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.Vx3ZNVXuq2 ++ echo ROBOT_VENV=/tmp/tmp.Vx3ZNVXuq2 +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.Vx3ZNVXuq2 ++ source /tmp/tmp.Vx3ZNVXuq2/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.Vx3ZNVXuq2 +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.Vx3ZNVXuq2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.Vx3ZNVXuq2) ' '!=' x ']' +++ PS1='(tmp.Vx3ZNVXuq2) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 
urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.Vx3ZNVXuq2/src/onap ++ rm -rf /tmp/tmp.Vx3ZNVXuq2/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. ++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==0.18.3 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q Linux ++ sudo apt-get -y -qq 
install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.Vx3ZNVXuq2/bin/activate + '[' -z /tmp/tmp.Vx3ZNVXuq2/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.Vx3ZNVXuq2/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.Vx3ZNVXuq2 ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.Vx3ZNVXuq2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.Vx3ZNVXuq2) ' ++ '[' 'x(tmp.Vx3ZNVXuq2) ' '!=' x ']' ++ PS1='(tmp.Vx3ZNVXuq2) (tmp.Vx3ZNVXuq2) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.Hjz3EwQKXg + cd /tmp/tmp.Hjz3EwQKXg + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
+ SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
+++ GERRIT_BRANCH=master
+++ echo GERRIT_BRANCH=master
GERRIT_BRANCH=master
+++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
+++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
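The sourced setup-pap.sh derives a second policy payload from vCPE.policy.monitoring.input.tosca.json with two sed substitutions (version 1.0.0 → 2.0.0, policy-version 1 → 2). A minimal runnable sketch of that version bump; the inline sample.json here is a hypothetical stand-in for the real file from policy-models:

```shell
# Sketch of the version-bump sed applied to the vCPE monitoring policy.
# sample.json is an illustrative stand-in, NOT the actual
# models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
cat > sample.json <<'EOF'
{"version": "1.0.0", "metadata": {"policy-version": 1}}
EOF
# Same substitutions as the log: bump both the TOSCA version and policy-version
sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' \
    -e 's!"policy-version": 1!"policy-version": 2!' sample.json
```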
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
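The start-compose.sh argument handling traced above takes one bare argument as the component name and treats --grafana (and, per its defaults, --gui) as boolean flags. A standalone, POSIX-shell reconstruction of that loop; variable names follow the trace, the rest is illustrative:

```shell
# Illustrative reconstruction of the start-compose.sh argument loop traced above:
# the first bare argument selects the component, --grafana/--gui flip booleans.
grafana=false
gui=false
component=""
set -- apex-pdp --grafana   # same arguments as in this log
while [ "$#" -gt 0 ]; do
  key=$1
  case $key in
    --grafana) grafana=true ;;
    --gui)     gui=true ;;
    *)         component=$key ;;
  esac
  shift
done
echo "component=$component grafana=$grafana gui=$gui"
```

Run with the log's arguments this prints `component=apex-pdp grafana=true gui=false`, matching the "Starting apex-pdp application with Grafana" branch taken above.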
latest: Pulling from prom/prometheus
Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:7567a7c70a3c1d75aeeedc968d1304174a16651e55a60d1fb132a05e1e63a054
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:bedafcd670058dc2d485934eb404bb04ce1a30b23cf7a567427a60ae561f25c7
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:8a0432281bb5edb6d25e3d0e62d78b6aebc2875f52ecd11259251b497208c04e
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT
Creating prometheus ...
Creating mariadb ...
Creating compose_zookeeper_1 ...
Creating simulator ...
Creating compose_zookeeper_1 ... done
Creating kafka ...
Creating kafka ... done
Creating prometheus ... done
Creating grafana ...
Creating grafana ... done
Creating mariadb ... done
Creating policy-db-migrator ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating simulator ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
NAMES                 STATUS
policy-apex-pdp       Up 10 seconds
policy-pap            Up 12 seconds
policy-api            Up 13 seconds
grafana               Up 16 seconds
kafka                 Up 18 seconds
simulator             Up 11 seconds
prometheus            Up 17 seconds
compose_zookeeper_1   Up 19 seconds
mariadb               Up 15 seconds
NAMES                 STATUS
policy-apex-pdp       Up 15 seconds
policy-pap            Up 17 seconds
policy-api            Up 18 seconds
grafana               Up 21 seconds
kafka                 Up 23 seconds
simulator             Up 16 seconds
prometheus            Up 22 seconds
compose_zookeeper_1   Up 24 seconds
mariadb               Up 20 seconds
NAMES                 STATUS
policy-apex-pdp       Up 20 seconds
policy-pap            Up 22 seconds
policy-api            Up 23 seconds
grafana               Up 26 seconds
kafka                 Up 28 seconds
simulator             Up 21 seconds
prometheus            Up 27 seconds
compose_zookeeper_1   Up 29 seconds
mariadb               Up 25 seconds
NAMES                 STATUS
policy-apex-pdp       Up 25 seconds
policy-pap            Up 27 seconds
policy-api            Up 28 seconds
grafana               Up 31 seconds
kafka                 Up 33 seconds
simulator             Up 26 seconds
prometheus            Up 32 seconds
compose_zookeeper_1   Up 34 seconds
mariadb               Up 30 seconds
NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 32 seconds
policy-api            Up 33 seconds
grafana               Up 36 seconds
kafka                 Up 38 seconds
simulator             Up 31 seconds
prometheus            Up 37 seconds
compose_zookeeper_1   Up 39 seconds
mariadb               Up 35 seconds
NAMES                 STATUS
policy-apex-pdp       Up 35 seconds
policy-pap            Up 37 seconds
policy-api            Up 38 seconds
grafana               Up 41 seconds
kafka                 Up 43 seconds
simulator             Up 36 seconds
prometheus            Up 42 seconds
compose_zookeeper_1   Up 44 seconds
mariadb               Up 40 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo
"${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:15:00 up 4 min, 0 users, load average: 2.91, 1.35, 0.54
Tasks: 207 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 13.3 us, 2.8 sy, 0.0 ni, 79.6 id, 4.2 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         22G        1.3M        6.7G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 35 seconds
policy-pap            Up 37 seconds
policy-api            Up 39 seconds
grafana               Up 42 seconds
kafka                 Up 44 seconds
simulator             Up 36 seconds
prometheus            Up 42 seconds
compose_zookeeper_1   Up 44 seconds
mariadb               Up 40 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %    MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
ee1f1738261a   policy-apex-pdp       8.36%    182.6MiB / 31.41GiB   0.57%   9.03kB / 8.49kB   0B / 0B         48
b1b7d7896699   policy-pap            29.15%   504.8MiB / 31.41GiB   1.57%   28.8kB / 30.6kB   0B / 181MB      63
5fd5934147ab   policy-api            0.12%    504.9MiB / 31.41GiB   1.57%   1e+03kB / 711kB   0B / 0B         55
bd4f8fba0e1a   grafana               0.04%    57.91MiB / 31.41GiB   0.18%   19.5kB / 3.4kB    0B / 24MB       17
9f872bbe5af4   kafka                 1.41%    364.9MiB / 31.41GiB   1.13%   64.5kB / 67.3kB   0B / 475kB      81
879620c6b816   simulator             0.08%    123.8MiB / 31.41GiB   0.38%   1.15kB / 0B       0B / 0B         76
a487037c08b5   prometheus            0.00%    18.45MiB / 31.41GiB   0.06%   1.64kB / 474B     98.3kB / 0B     12
1d8b3d85a938   compose_zookeeper_1   0.10%    99.92MiB / 31.41GiB   0.31%   53.1kB / 46.7kB   131kB / 385kB   60
43559b1a61a2   mariadb               0.02%    101.8MiB / 31.41GiB   0.32%   997kB / 1.18MB    11MB / 46.3MB   37
+ echo
+ cd /tmp/tmp.Hjz3EwQKXg
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
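The testplan handling above (egrep to drop comments and blank lines, sed to prefix the tests directory, xargs to flatten into one string) is self-contained enough to rerun in isolation. A sketch using the paths from this log; the comment and blank line in the sample testplan are added here to show what the filter removes:

```shell
# Rebuild SUITES the way the log does: filter testplan.txt, prefix each suite
# with the tests directory, and flatten to a single space-separated string.
TESTS_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
printf '%s\n' 'pap-test.robot' '# comment lines are skipped' '' 'pap-slas.robot' > testplan.txt
# The log uses the older `egrep` spelling; `grep -E` is equivalent
SUITES=$(grep -E -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
  | sed "s|^|${TESTS_DIR}/|" | xargs)
echo "$SUITES"
```

This yields the two absolute suite paths on one line, exactly the SUITES value the run-project-csit.sh trace assigns before invoking robot.run.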
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | PASS |
22 tests, 22 passed, 0 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output: /tmp/tmp.Hjz3EwQKXg/output.xml
Log: /tmp/tmp.Hjz3EwQKXg/log.html
Report: /tmp/tmp.Hjz3EwQKXg/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ tr : ' '
++ echo braceexpand:hashall:interactive-comments:xtrace
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
policy-api            Up 2 minutes
grafana               Up 2 minutes
kafka                 Up 2 minutes
simulator             Up 2 minutes
prometheus            Up 2 minutes
compose_zookeeper_1   Up 2 minutes
mariadb               Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:16:49 up 6 min, 0 users, load average: 0.60, 1.02, 0.51
Tasks: 195 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
%Cpu(s): 10.7 us, 2.1 sy, 0.0 ni, 83.8 id, 3.3 wa, 0.0 hi, 0.0 si, 0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         21G        1.3M        6.7G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
policy-api            Up 2 minutes
grafana               Up 2 minutes
kafka                 Up 2 minutes
simulator             Up 2 minutes
prometheus Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS ee1f1738261a policy-apex-pdp 0.40% 181.6MiB / 31.41GiB 0.56% 56.3kB / 80.3kB 0B / 0B 50 b1b7d7896699 policy-pap 1.09% 498.6MiB / 31.41GiB 1.55% 2.33MB / 815kB 0B / 181MB 65 5fd5934147ab policy-api 0.10% 543.3MiB / 31.41GiB 1.69% 2.49MB / 1.27MB 0B / 0B 56 bd4f8fba0e1a grafana 0.02% 65.58MiB / 31.41GiB 0.20% 20.5kB / 4.49kB 0B / 24MB 17 9f872bbe5af4 kafka 11.57% 388.6MiB / 31.41GiB 1.21% 236kB / 212kB 0B / 582kB 83 879620c6b816 simulator 0.07% 123.8MiB / 31.41GiB 0.38% 1.37kB / 0B 0B / 0B 76 a487037c08b5 prometheus 0.00% 24.73MiB / 31.41GiB 0.08% 180kB / 10.2kB 98.3kB / 0B 12 1d8b3d85a938 compose_zookeeper_1 0.09% 100MiB / 31.41GiB 0.31% 56kB / 48.3kB 131kB / 385kB 60 43559b1a61a2 mariadb 0.01% 103.1MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 46.7MB 28 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, grafana, kafka, simulator, prometheus, compose_zookeeper_1, mariadb zookeeper_1 | ===> User zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper_1 | ===> Configuring ... 
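The xtrace above shows the job's option-management helpers at work: `relax_set` loosens error handling before commands that may fail, and `load_set` later restores a known baseline by iterating over `$SHELLOPTS` and a saved short-option string. The sketch below reconstructs that pattern from the trace alone; the function bodies are assumptions inferred from the xtrace output, not the actual CSIT helper scripts.

```shell
#!/bin/bash
# Sketch of the save/restore pattern visible in the trace above.
# relax_set/load_set are the helper names seen in the log; their bodies
# here are reconstructed assumptions, not the real CSIT implementation.

relax_set() {
    # loosen error handling around commands that are allowed to fail
    set +e
    set +o pipefail
}

load_set() {
    # clear every long option currently listed in SHELLOPTS
    # (the trace shows: set +o braceexpand, +o hashall, +o xtrace, ...)
    local _setopts="hxB"
    for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
        set +o "$i"
    done
    # then clear the recorded short options one letter at a time
    # (the trace shows: set +h, set +x)
    for i in $(echo "$_setopts" | sed 's/./& /g'); do
        set "+$i"
    done
}

relax_set
false                  # tolerated while error handling is relaxed
RESULT=$?
load_set
echo "RESULT: $RESULT" # prints RESULT: 1 for the deliberate 'false' above
```

The `sed 's/./& /g'` trick splits the flag string `hxB` into space-separated letters so each can be fed to `set +<flag>`, mirroring the `++ sed 's/./& /g'` lines in the trace.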
zookeeper_1 | ===> Running preflight checks ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper_1 | ===> Launching ...
zookeeper_1 | ===> Launching zookeeper ...
zookeeper_1 | [2024-02-05 23:14:18,670] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-05 23:14:18,678] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-05 23:14:18,678] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-05 23:14:18,678] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-05 23:14:18,678] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-05 23:14:18,679] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-02-05 23:14:18,679] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-02-05 23:14:18,680] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-02-05 23:14:18,680] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper_1 | [2024-02-05 23:14:18,681] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper_1 | [2024-02-05 23:14:18,681] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-05 23:14:18,682] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-05 23:14:18,682] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-05 23:14:18,682] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-05 23:14:18,682] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-05 23:14:18,682] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper_1 | [2024-02-05 23:14:18,697] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@5fa07e12 (org.apache.zookeeper.server.ServerMetrics)
zookeeper_1 | [2024-02-05 23:14:18,701] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-02-05 23:14:18,701] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-02-05 23:14:18,703] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-02-05 23:14:18,712] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,712] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,712] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,712] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,712] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,712] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,712] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,712] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,712] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,712] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:host.name=1d8b3d85a938 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,715] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper_1 | [2024-02-05 23:14:18,716] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,716] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,717] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper_1 | [2024-02-05 23:14:18,717] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper_1 | [2024-02-05 23:14:18,717] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-05 23:14:18,717] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-05 23:14:18,717] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-05 23:14:18,718] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-05 23:14:18,718] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-05 23:14:18,718] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-05 23:14:18,720] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,720] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,720] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper_1 | [2024-02-05 23:14:18,720] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper_1 | [2024-02-05 23:14:18,720] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,738] INFO Logging initialized @496ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper_1 | [2024-02-05 23:14:18,820] WARN o.e.j.s.ServletContextHandler@45385f75{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-02-05 23:14:18,820] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-02-05 23:14:18,838] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
zookeeper_1 | [2024-02-05 23:14:18,870] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-02-05 23:14:18,870] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-02-05 23:14:18,872] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-02-05 23:14:18,876] WARN ServletContext@o.e.j.s.ServletContextHandler@45385f75{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper_1 | [2024-02-05 23:14:18,886] INFO Started o.e.j.s.ServletContextHandler@45385f75{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-02-05 23:14:18,898] INFO Started ServerConnector@304bb45b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper_1 | [2024-02-05 23:14:18,898] INFO Started @656ms (org.eclipse.jetty.server.Server)
zookeeper_1 | [2024-02-05 23:14:18,898] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper_1 | [2024-02-05 23:14:18,902] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper_1 | [2024-02-05 23:14:18,902] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper_1 | [2024-02-05 23:14:18,904] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2024-02-05 23:14:18,905] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2024-02-05 23:14:18,920] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper_1 | [2024-02-05 23:14:18,921] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper_1 | [2024-02-05 23:14:18,922] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-02-05 23:14:18,922] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-02-05 23:14:18,926] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper_1 | [2024-02-05 23:14:18,926] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-02-05 23:14:18,928] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-02-05 23:14:18,929] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-02-05 23:14:18,930] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-05 23:14:18,937] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper_1 | [2024-02-05 23:14:18,937] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper_1 | [2024-02-05 23:14:18,955] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper_1 | [2024-02-05 23:14:18,956] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper_1 | [2024-02-05 23:14:20,174] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
mariadb | 2024-02-05 23:14:19+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-02-05 23:14:19+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mariadb | 2024-02-05 23:14:19+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-02-05 23:14:19+00:00 [Note] [Entrypoint]: Initializing database files
mariadb | 2024-02-05 23:14:19 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-02-05 23:14:19 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-02-05 23:14:19 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb |
mariadb |
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mariadb | To do so, start the server, then issue the following command:
mariadb |
mariadb | '/usr/bin/mysql_secure_installation'
mariadb |
mariadb | which will also give you the option of removing the test
mariadb | databases and anonymous user created by default. This is
mariadb | strongly recommended for production servers.
mariadb |
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
mariadb |
mariadb | Please report any problems at https://mariadb.org/jira
mariadb |
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
mariadb |
mariadb | Consider joining MariaDB's strong and vibrant community:
mariadb | https://mariadb.org/get-involved/
mariadb |
mariadb | 2024-02-05 23:14:21+00:00 [Note] [Entrypoint]: Database files initialized
mariadb | 2024-02-05 23:14:21+00:00 [Note] [Entrypoint]: Starting temporary server
mariadb | 2024-02-05 23:14:21+00:00 [Note] [Entrypoint]: Waiting for server startup
mariadb | 2024-02-05 23:14:21 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ...
mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-02-05 23:14:21 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-02-05 23:14:21 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-02-05 23:14:21 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: log sequence number 46590; transaction id 14
mariadb | 2024-02-05 23:14:21 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-02-05 23:14:21 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb | 2024-02-05 23:14:21 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-02-05 23:14:21 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-02-05 23:14:21 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 0  mariadb.org binary distribution
mariadb | 2024-02-05 23:14:22+00:00 [Note] [Entrypoint]: Temporary server started.
mariadb | 2024-02-05 23:14:24+00:00 [Note] [Entrypoint]: Creating user policy_user
mariadb | 2024-02-05 23:14:24+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
mariadb |
mariadb | 2024-02-05 23:14:24+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
mariadb |
mariadb | 2024-02-05 23:14:24+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
mariadb | #!/bin/bash -xv
mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
mariadb | #
mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
mariadb | # you may not use this file except in compliance with the License.
mariadb | # You may obtain a copy of the License at
mariadb | #
mariadb | #     http://www.apache.org/licenses/LICENSE-2.0
mariadb | #
mariadb | # Unless required by applicable law or agreed to in writing, software
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
mariadb | # See the License for the specific language governing permissions and
mariadb | # limitations under the License.
mariadb |
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb |
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
kafka | [2024-02-05 23:14:20,110] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-02-05 23:14:20,111] INFO Client environment:host.name=9f872bbe5af4 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-02-05 23:14:20,111] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-02-05 23:14:20,111] INFO Client environment:java.vendor=Azul Systems, Inc.
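[Editor's note] The db.sh trace above shows the standard idempotent init pattern: loop over schema names, `CREATE DATABASE IF NOT EXISTS`, then grant the application user full privileges. A minimal dry-run sketch of that loop, which only prints the generated SQL instead of piping it to `mysql` (the `generate_sql` helper and the `DATABASES` variable are illustrative, not part of the image):

```shell
#!/bin/bash
# Dry-run sketch of the db.sh loop seen in the mariadb entrypoint trace above.
# Prints the SQL that the real script would pipe to: mysql -uroot -p"${MYSQL_ROOT_PASSWORD}"
DATABASES="migration pooling policyadmin operationshistory clampacm policyclamp"
MYSQL_USER="${MYSQL_USER:-policy_user}"

generate_sql() {
    local db
    for db in $DATABASES; do
        # IF NOT EXISTS makes re-running the init script safe
        echo "CREATE DATABASE IF NOT EXISTS ${db};"
        # escaped backtick quoting matches the GRANT statement in the trace
        echo "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
    done
}

generate_sql
```

Each database gets its own CREATE/GRANT pair, which is why the xtrace output above repeats the `+ for db in ...` line six times.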
(org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,111] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,111] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gso
n-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,112] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,112] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,112] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,112] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,112] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:20,112] 
INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-02-05 23:14:20,115] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-02-05 23:14:20,118] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-02-05 23:14:20,123] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-02-05 23:14:20,130] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-02-05 23:14:20,143] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-02-05 23:14:20,147] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2024-02-05 23:14:20,157] INFO Socket connection established, initiating session, client: /172.17.0.6:33642, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-02-05 23:14:20,193] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003b5f80000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-02-05 23:14:20,325] INFO Session: 0x1000003b5f80000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2024-02-05 23:14:20,326] INFO EventThread shut down for session: 0x1000003b5f80000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2024-02-05 23:14:21,003] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2024-02-05 23:14:21,321] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-02-05 23:14:21,395] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2024-02-05 23:14:21,397] INFO starting (kafka.server.KafkaServer)
kafka | [2024-02-05 23:14:21,397] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2024-02-05 23:14:21,410] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-02-05 23:14:21,414] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
grafana | logger=settings t=2024-02-05T23:14:18.404997351Z level=info msg="Starting Grafana" version=10.3.1 commit=00a22ff8b28550d593ec369ba3da1b25780f0a4a branch=HEAD compiled=2024-01-22T18:40:42Z
grafana | logger=settings t=2024-02-05T23:14:18.405266192Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2024-02-05T23:14:18.405281675Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2024-02-05T23:14:18.405310942Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2024-02-05T23:14:18.405323755Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2024-02-05T23:14:18.405327346Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-02-05T23:14:18.405353322Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2024-02-05T23:14:18.405359583Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2024-02-05T23:14:18.405367235Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2024-02-05T23:14:18.405373546Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2024-02-05T23:14:18.405377777Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings
t=2024-02-05T23:14:18.405381018Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2024-02-05T23:14:18.405384389Z level=info msg=Target target=[all]
grafana | logger=settings t=2024-02-05T23:14:18.40539143Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2024-02-05T23:14:18.405394401Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2024-02-05T23:14:18.405398762Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2024-02-05T23:14:18.405401643Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2024-02-05T23:14:18.405405433Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2024-02-05T23:14:18.405438731Z level=info msg="App mode production"
grafana | logger=sqlstore t=2024-02-05T23:14:18.405774937Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2024-02-05T23:14:18.405796622Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2024-02-05T23:14:18.406499842Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2024-02-05T23:14:18.407507911Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2024-02-05T23:14:18.408324616Z level=info msg="Migration successfully executed" id="create migration_log table" duration=816.344µs
grafana | logger=migrator t=2024-02-05T23:14:18.414372199Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2024-02-05T23:14:18.415202287Z level=info msg="Migration successfully executed" id="create user table" duration=829.688µs
grafana | logger=migrator t=2024-02-05T23:14:18.418435081Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2024-02-05T23:14:18.419298856Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=862.075µs
grafana | logger=migrator t=2024-02-05T23:14:18.422574299Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2024-02-05T23:14:18.423377492Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=802.793µs
grafana | logger=migrator t=2024-02-05T23:14:18.429060592Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2024-02-05T23:14:18.429826265Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=767.374µs
grafana | logger=migrator t=2024-02-05T23:14:18.432811363Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2024-02-05T23:14:18.433539808Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=728.275µs
grafana | logger=migrator t=2024-02-05T23:14:18.436299534Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2024-02-05T23:14:18.439240241Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.939967ms
grafana | logger=migrator t=2024-02-05T23:14:18.444638196Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2024-02-05T23:14:18.445476237Z level=info msg="Migration successfully executed" id="create user table v2" duration=834.16µs
grafana | logger=migrator t=2024-02-05T23:14:18.448534551Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2024-02-05T23:14:18.449704776Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.167435ms
grafana | logger=migrator t=2024-02-05T23:14:18.45302839Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2024-02-05T23:14:18.454196206Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.167385ms
grafana | logger=migrator t=2024-02-05T23:14:18.459438075Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2024-02-05T23:14:18.460010835Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=573.12µs
grafana | logger=migrator t=2024-02-05T23:14:18.462967856Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2024-02-05T23:14:18.463652481Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=686.646µs
grafana | logger=migrator t=2024-02-05T23:14:18.466682379Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2024-02-05T23:14:18.467862427Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.181629ms
grafana | logger=migrator t=2024-02-05T23:14:18.470979034Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2024-02-05T23:14:18.471024474Z level=info msg="Migration successfully executed" id="Update user table charset" duration=48.401µs
grafana | logger=migrator t=2024-02-05T23:14:18.476450146Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2024-02-05T23:14:18.477732866Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.282841ms
grafana | logger=migrator t=2024-02-05T23:14:18.480887212Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2024-02-05T23:14:18.481165225Z level=info msg="Migration successfully executed" id="Add missing user data" duration=277.873µs
grafana | logger=migrator t=2024-02-05T23:14:18.484074075Z level=info msg="Executing migration" id="Add
is_disabled column to user" kafka | [2024-02-05 23:14:21,414] INFO Client environment:host.name=9f872bbe5af4 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/u
sr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../sha
re/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafk
a/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 
kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
mariadb |
mariadb | 2024-02-05 23:14:24 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
mariadb | 2024-02-05 23:14:24 0 [Note] InnoDB: FTS optimize thread exiting.
mariadb | 2024-02-05 23:14:24+00:00 [Note] [Entrypoint]: Stopping temporary server
mariadb | 2024-02-05 23:14:24 0 [Note] InnoDB: Starting shutdown...
mariadb | 2024-02-05 23:14:24 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
mariadb | 2024-02-05 23:14:24 0 [Note] InnoDB: Buffer pool(s) dump completed at 240205 23:14:24
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Shutdown completed; log sequence number 339701; transaction id 298
mariadb | 2024-02-05 23:14:25 0 [Note] mariadbd: Shutdown complete
mariadb |
mariadb | 2024-02-05 23:14:25+00:00 [Note] [Entrypoint]: Temporary server stopped
mariadb |
mariadb | 2024-02-05 23:14:25+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
mariadb |
mariadb | 2024-02-05 23:14:25 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-02-05 23:14:25 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-02-05 23:14:25 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-02-05 23:14:25 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: log sequence number 339701; transaction id 299
mariadb | 2024-02-05 23:14:25 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mariadb | 2024-02-05 23:14:25 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb | 2024-02-05 23:14:25 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
mariadb | 2024-02-05 23:14:25 0 [Note] Server socket created on IP: '0.0.0.0'.
mariadb | 2024-02-05 23:14:25 0 [Note] Server socket created on IP: '::'.
mariadb | 2024-02-05 23:14:25 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Buffer pool(s) load completed at 240205 23:14:25
mariadb | 2024-02-05 23:14:25 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
mariadb | 2024-02-05 23:14:26 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
mariadb | 2024-02-05 23:14:26 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication)
mariadb | 2024-02-05 23:14:26 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
grafana | logger=migrator t=2024-02-05T23:14:18.485212984Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.138799ms
grafana | logger=migrator t=2024-02-05T23:14:18.488202752Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2024-02-05T23:14:18.488901241Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=698.088µs
grafana | logger=migrator t=2024-02-05T23:14:18.49396311Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2024-02-05T23:14:18.495097847Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.134407ms
grafana | logger=migrator t=2024-02-05T23:14:18.498140187Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator
t=2024-02-05T23:14:18.507827236Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.685529ms
grafana | logger=migrator t=2024-02-05T23:14:18.510983212Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2024-02-05T23:14:18.511699924Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=716.343µs
grafana | logger=migrator t=2024-02-05T23:14:18.514896549Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2024-02-05T23:14:18.515606591Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=704.931µs
grafana | logger=migrator t=2024-02-05T23:14:18.520572917Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2024-02-05T23:14:18.521276908Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=703.56µs
grafana | logger=migrator t=2024-02-05T23:14:18.524435725Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2024-02-05T23:14:18.525169411Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=732.706µs
grafana | logger=migrator t=2024-02-05T23:14:18.530207864Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2024-02-05T23:14:18.530955324Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=747.059µs
grafana | logger=migrator t=2024-02-05T23:14:18.534053567Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2024-02-05T23:14:18.534089625Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=32.047µs
grafana | logger=migrator t=2024-02-05T23:14:18.538197598Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2024-02-05T23:14:18.538924362Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=732.036µs
grafana | logger=migrator t=2024-02-05T23:14:18.541967483Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2024-02-05T23:14:18.542730726Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=763.403µs
grafana | logger=migrator t=2024-02-05T23:14:18.548174981Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2024-02-05T23:14:18.54909204Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=920.679µs
grafana | logger=migrator t=2024-02-05T23:14:18.552166207Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2024-02-05T23:14:18.55297313Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=809.004µs
grafana | logger=migrator t=2024-02-05T23:14:18.558595266Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2024-02-05T23:14:18.562218058Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.621451ms
grafana | logger=migrator t=2024-02-05T23:14:18.565478317Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2024-02-05T23:14:18.566374702Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=901.255µs
grafana | logger=migrator t=2024-02-05T23:14:18.569348546Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2024-02-05T23:14:18.570298101Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=950.806µs
grafana | logger=migrator t=2024-02-05T23:14:18.573323248Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2024-02-05T23:14:18.574214991Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=893.633µs
grafana | logger=migrator t=2024-02-05T23:14:18.579520494Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2024-02-05T23:14:18.580445314Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=926.051µs
grafana | logger=migrator t=2024-02-05T23:14:18.584124749Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2024-02-05T23:14:18.585052209Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=920.099µs
grafana | logger=migrator t=2024-02-05T23:14:18.588085038Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2024-02-05T23:14:18.588542982Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=459.564µs
grafana | logger=migrator t=2024-02-05T23:14:18.594819416Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2024-02-05T23:14:18.595770882Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=951.765µs
grafana | logger=migrator t=2024-02-05T23:14:18.598945832Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2024-02-05T23:14:18.599577506Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire"
duration=632.034µs grafana | logger=migrator t=2024-02-05T23:14:18.60263404Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-02-05T23:14:18.603313514Z level=info msg="Migration successfully executed" id="create star table" duration=679.594µs grafana | logger=migrator t=2024-02-05T23:14:18.606929494Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-02-05T23:14:18.607621681Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=693.527µs grafana | logger=migrator t=2024-02-05T23:14:18.612542738Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-02-05T23:14:18.613262402Z level=info msg="Migration successfully executed" id="create org table v1" duration=719.794µs grafana | logger=migrator t=2024-02-05T23:14:18.61753368Z level=info msg="Executing migration" id="create index UQE_org_name - v1" kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,416] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68be8808 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-05 23:14:21,419] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-02-05 23:14:21,424] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-05 23:14:21,431] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-02-05 23:14:21,432] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-02-05 23:14:21,438] INFO Socket connection established, initiating session, client: /172.17.0.6:33644, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-05 23:14:21,508] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003b5f80001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-05 23:14:21,513] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-02-05 23:14:21,783] INFO Cluster ID = GFmMeC8ERWyjG0XVKKQ9OQ (kafka.server.KafkaServer) kafka | [2024-02-05 23:14:21,786] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-02-05 23:14:21,832] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms 
= 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 grafana | logger=migrator t=2024-02-05T23:14:18.618913624Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.379273ms grafana | logger=migrator t=2024-02-05T23:14:18.622143717Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-02-05T23:14:18.623379917Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.241661ms grafana | logger=migrator t=2024-02-05T23:14:18.628408698Z level=info 
msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-02-05T23:14:18.62925364Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=844.791µs grafana | logger=migrator t=2024-02-05T23:14:18.632095815Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-02-05T23:14:18.632915091Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=819.117µs grafana | logger=migrator t=2024-02-05T23:14:18.635764788Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-02-05T23:14:18.636580903Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=815.285µs grafana | logger=migrator t=2024-02-05T23:14:18.640016142Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-02-05T23:14:18.640080677Z level=info msg="Migration successfully executed" id="Update org table charset" duration=65.645µs grafana | logger=migrator t=2024-02-05T23:14:18.64573578Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-02-05T23:14:18.645760726Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=25.995µs grafana | logger=migrator t=2024-02-05T23:14:18.648701793Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-02-05T23:14:18.648943898Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=240.834µs grafana | logger=migrator t=2024-02-05T23:14:18.698442591Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-02-05T23:14:18.699825934Z level=info msg="Migration successfully executed" 
id="create dashboard table" duration=1.388555ms grafana | logger=migrator t=2024-02-05T23:14:18.705376003Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-02-05T23:14:18.706797687Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.421113ms grafana | logger=migrator t=2024-02-05T23:14:18.712038616Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-02-05T23:14:18.713181365Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.142118ms grafana | logger=migrator t=2024-02-05T23:14:18.717043122Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-02-05T23:14:18.717886003Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=845.531µs grafana | logger=migrator t=2024-02-05T23:14:18.721127188Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-02-05T23:14:18.721954826Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=827.327µs grafana | logger=migrator t=2024-02-05T23:14:18.725268857Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-02-05T23:14:18.726044634Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=775.687µs grafana | logger=migrator t=2024-02-05T23:14:18.730235195Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-02-05T23:14:18.736745753Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.504157ms grafana | logger=migrator t=2024-02-05T23:14:18.740049832Z 
level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-02-05T23:14:18.74079068Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=740.678µs grafana | logger=migrator t=2024-02-05T23:14:18.744997515Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-02-05T23:14:18.745775182Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=777.296µs grafana | logger=migrator t=2024-02-05T23:14:18.749333559Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-02-05T23:14:18.750114456Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=780.507µs grafana | logger=migrator t=2024-02-05T23:14:18.753298658Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-02-05T23:14:18.753669203Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=368.244µs grafana | logger=migrator t=2024-02-05T23:14:18.758215974Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-02-05T23:14:18.759316795Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.101461ms grafana | logger=migrator t=2024-02-05T23:14:18.763898674Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-02-05T23:14:18.764018251Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=115.706µs grafana | logger=migrator t=2024-02-05T23:14:18.767488369Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-02-05T23:14:18.768840065Z level=info msg="Migration successfully executed" id="Add column updated_by in 
dashboard - v2" duration=1.351606ms grafana | logger=migrator t=2024-02-05T23:14:18.771848068Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-02-05T23:14:18.773143322Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.294875ms grafana | logger=migrator t=2024-02-05T23:14:18.777167235Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-02-05T23:14:18.778431451Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.263906ms grafana | logger=migrator t=2024-02-05T23:14:18.782187084Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-02-05T23:14:18.783032356Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=845.601µs grafana | logger=migrator t=2024-02-05T23:14:18.786376054Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-02-05T23:14:18.788226815Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.85035ms grafana | logger=migrator t=2024-02-05T23:14:18.792252228Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2024-02-05T23:14:18.793066673Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=814.215µs grafana | logger=migrator t=2024-02-05T23:14:18.796208936Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-02-05T23:14:18.797008067Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=798.711µs grafana | logger=migrator t=2024-02-05T23:14:18.80098559Z level=info msg="Executing migration" id="Update dashboard table 
charset" grafana | logger=migrator t=2024-02-05T23:14:18.801011266Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=26.786µs grafana | logger=migrator t=2024-02-05T23:14:18.804274066Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-02-05T23:14:18.804297722Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=24.495µs grafana | logger=migrator t=2024-02-05T23:14:18.806925038Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-02-05T23:14:18.808836051Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.910412ms grafana | logger=migrator t=2024-02-05T23:14:18.813764239Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-02-05T23:14:18.815672063Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.912984ms grafana | logger=migrator t=2024-02-05T23:14:18.819730313Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-02-05T23:14:18.821843073Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.112529ms grafana | logger=migrator t=2024-02-05T23:14:18.824882683Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-02-05T23:14:18.826846218Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.963126ms grafana | logger=migrator t=2024-02-05T23:14:18.830273556Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-02-05T23:14:18.830547798Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=273.242µs grafana | logger=migrator 
t=2024-02-05T23:14:18.834802614Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-02-05T23:14:18.836247601Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.433685ms grafana | logger=migrator t=2024-02-05T23:14:18.839934098Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-02-05T23:14:18.841153985Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.219487ms grafana | logger=migrator t=2024-02-05T23:14:18.844691858Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-02-05T23:14:18.844731047Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=46.05µs grafana | logger=migrator t=2024-02-05T23:14:18.847996767Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-02-05T23:14:18.848877918Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=880.541µs grafana | logger=migrator t=2024-02-05T23:14:18.852784384Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-02-05T23:14:18.853457427Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=672.492µs grafana | logger=migrator t=2024-02-05T23:14:18.856638138Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-02-05T23:14:18.863619443Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.976513ms grafana | logger=migrator t=2024-02-05T23:14:18.866932004Z level=info msg="Executing migration" 
id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-02-05T23:14:18.867614479Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=678.944µs grafana | logger=migrator t=2024-02-05T23:14:18.871515655Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-02-05T23:14:18.872847337Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.331101ms grafana | logger=migrator t=2024-02-05T23:14:18.882357405Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-02-05T23:14:18.883682326Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.32454ms grafana | logger=migrator t=2024-02-05T23:14:18.887026444Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-02-05T23:14:18.887432406Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=406.352µs grafana | logger=migrator t=2024-02-05T23:14:18.890797421Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-02-05T23:14:18.891291753Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=493.902µs grafana | logger=migrator t=2024-02-05T23:14:18.893800342Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-02-05T23:14:18.895316485Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.513873ms grafana | logger=migrator t=2024-02-05T23:14:18.897823184Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2024-02-05T23:14:18.898592159Z 
level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=769.774µs grafana | logger=migrator t=2024-02-05T23:14:18.902111318Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2024-02-05T23:14:18.902335249Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=223.66µs grafana | logger=migrator t=2024-02-05T23:14:18.905991598Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2024-02-05T23:14:18.906157766Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=163.507µs grafana | logger=migrator t=2024-02-05T23:14:18.908916952Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2024-02-05T23:14:18.909503345Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=586.343µs grafana | logger=migrator t=2024-02-05T23:14:18.912938194Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2024-02-05T23:14:18.914471232Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.532868ms grafana | logger=migrator t=2024-02-05T23:14:18.917277629Z level=info msg="Executing migration" id="create data_source table" policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.4:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.8:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . 
____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.4) policy-api | policy-api | [2024-02-05T23:14:34.528+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-02-05T23:14:34.530+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-02-05T23:14:36.201+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-02-05T23:14:36.285+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 75 ms. Found 6 JPA repository interfaces. policy-api | [2024-02-05T23:14:36.691+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-02-05T23:14:36.692+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-02-05T23:14:37.291+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-02-05T23:14:37.299+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-02-05T23:14:37.301+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-02-05T23:14:37.302+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] policy-api | [2024-02-05T23:14:37.395+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-02-05T23:14:37.395+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2805 ms policy-api | [2024-02-05T23:14:37.794+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-02-05T23:14:37.857+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-api | [2024-02-05T23:14:37.860+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer policy-api | [2024-02-05T23:14:37.909+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-02-05T23:14:38.230+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-02-05T23:14:38.250+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-02-05T23:14:38.362+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717 policy-api | [2024-02-05T23:14:38.364+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-api | [2024-02-05T23:14:38.396+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.4:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.6:9092) open policy-apex-pdp | Waiting for pap port 6969... policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-02-05T23:14:58.404+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-02-05T23:14:58.559+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = 
consumer-447a3058-d755-46ac-8e2e-59b142489c6a-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 447a3058-d755-46ac-8e2e-59b142489c6a policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 
kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.5-IV2 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null 
kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] policy-api | [2024-02-05T23:14:38.398+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead policy-api | [2024-02-05T23:14:40.222+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-02-05T23:14:40.227+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-02-05T23:14:41.508+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-02-05T23:14:42.255+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-02-05T23:14:43.302+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-02-05T23:14:43.484+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@19a7e618, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@22ccd80f, org.springframework.security.web.context.SecurityContextHolderFilter@2f29400e, org.springframework.security.web.header.HeaderWriterFilter@56d3e4a9, org.springframework.security.web.authentication.logout.LogoutFilter@ab8b1ef, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@543d242e, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@547a79cd, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@25e7e6d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@31829b82, org.springframework.security.web.access.ExceptionTranslationFilter@36c6d53b, org.springframework.security.web.access.intercept.AuthorizationFilter@680f7a5e] policy-api | [2024-02-05T23:14:44.330+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-02-05T23:14:44.391+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-02-05T23:14:44.423+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-02-05T23:14:44.440+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.63 seconds (process running for 11.197) policy-api | [2024-02-05T23:15:03.221+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-02-05T23:15:03.221+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-02-05T23:15:03.223+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed 
initialization in 1 ms policy-api | [2024-02-05T23:15:03.494+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: policy-api | [] policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null 
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-02-05T23:14:18.918194367Z level=info msg="Migration successfully executed" id="create data_source table" duration=915.898µs grafana | logger=migrator t=2024-02-05T23:14:18.921315265Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-02-05T23:14:18.922162948Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=847.433µs grafana | logger=migrator t=2024-02-05T23:14:18.926996394Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-02-05T23:14:18.927827143Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=830.419µs grafana | logger=migrator t=2024-02-05T23:14:18.93182706Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2024-02-05T23:14:18.932604447Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=777.226µs grafana | logger=migrator t=2024-02-05T23:14:18.936752758Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2024-02-05T23:14:18.937516491Z level=info msg="Migration successfully executed" 
id="drop index UQE_data_source_account_id_name - v1" duration=759.392µs grafana | logger=migrator t=2024-02-05T23:14:18.941553478Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2024-02-05T23:14:18.9494875Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.939202ms grafana | logger=migrator t=2024-02-05T23:14:18.952560427Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2024-02-05T23:14:18.953386004Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=825.107µs grafana | logger=migrator t=2024-02-05T23:14:18.956398828Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2024-02-05T23:14:18.957358185Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=959.017µs grafana | logger=migrator t=2024-02-05T23:14:18.961303461Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2024-02-05T23:14:18.962070455Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=766.563µs grafana | logger=migrator t=2024-02-05T23:14:18.966089527Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2024-02-05T23:14:18.966692394Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=600.847µs grafana | logger=migrator t=2024-02-05T23:14:18.969778565Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2024-02-05T23:14:18.972077667Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.298572ms grafana | logger=migrator t=2024-02-05T23:14:18.975959337Z level=info 
msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2024-02-05T23:14:18.978237095Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.276927ms grafana | logger=migrator t=2024-02-05T23:14:18.981436261Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2024-02-05T23:14:18.981488913Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=54.002µs grafana | logger=migrator t=2024-02-05T23:14:18.985126778Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2024-02-05T23:14:18.985536081Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=408.753µs grafana | logger=migrator t=2024-02-05T23:14:18.989948213Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2024-02-05T23:14:18.992562966Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.621986ms grafana | logger=migrator t=2024-02-05T23:14:18.995844511Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2024-02-05T23:14:18.996098569Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=257.668µs grafana | logger=migrator t=2024-02-05T23:14:18.998714812Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2024-02-05T23:14:18.998954877Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=239.684µs grafana | logger=migrator t=2024-02-05T23:14:19.002177378Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2024-02-05T23:14:19.004586789Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.410392ms grafana | logger=migrator t=2024-02-05T23:14:19.008889852Z 
level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2024-02-05T23:14:19.009158623Z level=info msg="Migration successfully executed" id="Update uid value" duration=268.231µs grafana | logger=migrator t=2024-02-05T23:14:19.01248309Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2024-02-05T23:14:19.013531729Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.048559ms grafana | logger=migrator t=2024-02-05T23:14:19.01678055Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2024-02-05T23:14:19.017673223Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=889.242µs grafana | logger=migrator t=2024-02-05T23:14:19.021587905Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2024-02-05T23:14:19.022467746Z level=info msg="Migration successfully executed" id="create api_key table" duration=885.233µs grafana | logger=migrator t=2024-02-05T23:14:19.025719357Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2024-02-05T23:14:19.026578102Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=858.625µs grafana | logger=migrator t=2024-02-05T23:14:19.029620486Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2024-02-05T23:14:19.030457316Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=836.29µs grafana | logger=migrator t=2024-02-05T23:14:19.034534796Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2024-02-05T23:14:19.035620133Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.084967ms grafana | 
logger=migrator t=2024-02-05T23:14:19.040494744Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2024-02-05T23:14:19.041282043Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=787.139µs grafana | logger=migrator t=2024-02-05T23:14:19.044201488Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2024-02-05T23:14:19.044990459Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=789.05µs policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-02-05T23:14:58.699+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-02-05T23:14:58.699+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-02-05T23:14:58.699+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174898698 policy-apex-pdp | [2024-02-05T23:14:58.701+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-1, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-02-05T23:14:58.713+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-02-05T23:14:58.713+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-02-05T23:14:58.719+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=447a3058-d755-46ac-8e2e-59b142489c6a, consumerInstance=policy-apex-pdp, 
fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-02-05T23:14:58.738+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2 policy-apex-pdp | client.rack = kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | 
password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | 
sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:19.049163799Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2024-02-05T23:14:19.049948478Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=783.238µs grafana | logger=migrator t=2024-02-05T23:14:19.053014316Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2024-02-05T23:14:19.06171467Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.699933ms grafana | logger=migrator t=2024-02-05T23:14:19.065983742Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2024-02-05T23:14:19.066760659Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=776.397µs grafana | logger=migrator t=2024-02-05T23:14:19.069949155Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2024-02-05T23:14:19.070829306Z level=info msg="Migration successfully executed" 
id="create index IDX_api_key_org_id - v2" duration=879.891µs grafana | logger=migrator t=2024-02-05T23:14:19.074076406Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2024-02-05T23:14:19.074947844Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=871.048µs grafana | logger=migrator t=2024-02-05T23:14:19.079187791Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2024-02-05T23:14:19.080059939Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=877.12µs grafana | logger=migrator t=2024-02-05T23:14:19.083229332Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2024-02-05T23:14:19.083637474Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=409.163µs grafana | logger=migrator t=2024-02-05T23:14:19.086952411Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2024-02-05T23:14:19.087622233Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=674.073µs grafana | logger=migrator t=2024-02-05T23:14:19.091435432Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2024-02-05T23:14:19.091460147Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=25.586µs grafana | logger=migrator t=2024-02-05T23:14:19.094628229Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2024-02-05T23:14:19.09717912Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.54789ms grafana | logger=migrator t=2024-02-05T23:14:19.100111869Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator 
t=2024-02-05T23:14:19.102595685Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.483016ms grafana | logger=migrator t=2024-02-05T23:14:19.105455656Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2024-02-05T23:14:19.105693121Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=236.974µs grafana | logger=migrator t=2024-02-05T23:14:19.109610474Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2024-02-05T23:14:19.112067844Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.45699ms grafana | logger=migrator t=2024-02-05T23:14:19.115361104Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2024-02-05T23:14:19.117919547Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.557793ms grafana | logger=migrator t=2024-02-05T23:14:19.121084808Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2024-02-05T23:14:19.121847392Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=762.153µs grafana | logger=migrator t=2024-02-05T23:14:19.125486621Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2024-02-05T23:14:19.126098301Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=611.321µs grafana | logger=migrator t=2024-02-05T23:14:19.152653962Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2024-02-05T23:14:19.153930712Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.275941ms grafana 
| logger=migrator t=2024-02-05T23:14:19.157461118Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2024-02-05T23:14:19.158853044Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.391267ms grafana | logger=migrator t=2024-02-05T23:14:19.163276213Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2024-02-05T23:14:19.164133908Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=856.855µs grafana | logger=migrator t=2024-02-05T23:14:19.167394071Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2024-02-05T23:14:19.168350629Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=955.189µs grafana | logger=migrator t=2024-02-05T23:14:19.17160355Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2024-02-05T23:14:19.171751554Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=147.193µs grafana | logger=migrator t=2024-02-05T23:14:19.175712856Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2024-02-05T23:14:19.175737282Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=25.366µs grafana | logger=migrator t=2024-02-05T23:14:19.178232931Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.4:3306) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.6:9092) open policy-pap | Waiting for api port 6969... 
policy-pap | api (172.17.0.9:6969) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap |   .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |  :: Spring Boot ::                (v3.1.7)
policy-pap |
policy-pap | [2024-02-05T23:14:47.742+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 35 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2024-02-05T23:14:47.744+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-pap | [2024-02-05T23:14:49.556+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2024-02-05T23:14:49.661+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 95 ms. Found 7 JPA repository interfaces.
policy-pap | [2024-02-05T23:14:50.122+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-02-05T23:14:50.123+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-02-05T23:14:50.841+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-pap | [2024-02-05T23:14:50.851+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2024-02-05T23:14:50.853+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2024-02-05T23:14:50.854+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
grafana | logger=migrator t=2024-02-05T23:14:19.180936807Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.705516ms
kafka | 	sasl.login.retry.backoff.ms = 100
policy-pap | [2024-02-05T23:14:50.944+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-apex-pdp | 	connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-02-05T23:14:19.18406471Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
policy-db-migrator | Waiting for mariadb port 3306...
kafka | sasl.mechanism.controller.protocol = GSSAPI simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json policy-pap | [2024-02-05T23:14:50.944+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3121 ms prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d policy-apex-pdp | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-02-05T23:14:19.18696528Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.900311ms policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | sasl.mechanism.inter.broker.protocol = GSSAPI simulator | overriding logback.xml policy-pap | [2024-02-05T23:14:51.370+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" policy-apex-pdp | enable.auto.commit = true grafana | logger=migrator t=2024-02-05T23:14:19.190439922Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | sasl.oauthbearer.clock.skew.seconds = 30 simulator | 2024-02-05 23:14:23,956 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json policy-pap | [2024-02-05T23:14:51.461+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, 
tags=netgo,builtinassets,stringlabels)" policy-apex-pdp | exclude.internal.topics = true grafana | logger=migrator t=2024-02-05T23:14:19.19055829Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=118.168µs policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | sasl.oauthbearer.expected.audience = null simulator | 2024-02-05 23:14:24,035 INFO org.onap.policy.models.simulators starting policy-pap | [2024-02-05T23:14:51.465+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" policy-apex-pdp | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-02-05T23:14:19.194556511Z level=info msg="Executing migration" id="create quota table v1" policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | sasl.oauthbearer.expected.issuer = null simulator | 2024-02-05 23:14:24,035 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties policy-pap | [2024-02-05T23:14:51.518+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" policy-apex-pdp | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-02-05T23:14:19.195141884Z level=info msg="Migration successfully executed" id="create quota table v1" duration=585.473µs policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 simulator | 2024-02-05 23:14:24,253 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION policy-pap | 
[2024-02-05T23:14:51.914+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" policy-apex-pdp | fetch.min.bytes = 1 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused grafana | logger=migrator t=2024-02-05T23:14:19.198470112Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 simulator | 2024-02-05 23:14:24,254 INFO org.onap.policy.models.simulators starting A&AI simulator policy-pap | [2024-02-05T23:14:51.936+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... prometheus | ts=2024-02-05T23:14:17.389Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 policy-apex-pdp | group.id = 447a3058-d755-46ac-8e2e-59b142489c6a policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 
grafana | logger=migrator t=2024-02-05T23:14:19.199470031Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.002779ms kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 simulator | 2024-02-05 23:14:24,346 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-pap | [2024-02-05T23:14:52.050+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4068102e prometheus | ts=2024-02-05T23:14:17.390Z caller=main.go:1039 level=info msg="Starting TSDB ..." 
policy-apex-pdp | group.instance.id = null policy-db-migrator | 321 blocks grafana | logger=migrator t=2024-02-05T23:14:19.202644924Z level=info msg="Executing migration" id="Update quota table charset" kafka | sasl.oauthbearer.jwks.endpoint.url = null simulator | 2024-02-05 23:14:24,357 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-pap | [2024-02-05T23:14:52.052+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
prometheus | ts=2024-02-05T23:14:17.394Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090 policy-apex-pdp | heartbeat.interval.ms = 3000 policy-db-migrator | Preparing upgrade release version: 0800 grafana | logger=migrator t=2024-02-05T23:14:19.202676431Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=32.658µs kafka | sasl.oauthbearer.scope.claim.name = scope simulator | 2024-02-05 23:14:24,359 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-pap | [2024-02-05T23:14:52.081+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) prometheus | ts=2024-02-05T23:14:17.396Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 policy-apex-pdp | interceptor.classes = [] policy-db-migrator | Preparing upgrade release version: 0900 grafana | logger=migrator t=2024-02-05T23:14:19.206906324Z level=info msg="Executing migration" id="create plugin_setting table" kafka | sasl.oauthbearer.sub.claim.name = sub simulator | 2024-02-05 23:14:24,366 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 policy-pap | [2024-02-05T23:14:52.082+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead prometheus | ts=2024-02-05T23:14:17.398Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" policy-apex-pdp | internal.leave.group.on.close = true policy-db-migrator | Preparing upgrade release version: 1000 grafana | logger=migrator t=2024-02-05T23:14:19.207644843Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=731.237µs kafka | sasl.oauthbearer.token.endpoint.url = null simulator | 2024-02-05 23:14:24,423 INFO Session workerName=node0 policy-pap | [2024-02-05T23:14:53.952+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) prometheus | ts=2024-02-05T23:14:17.398Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.401µs prometheus | ts=2024-02-05T23:14:17.398Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while" policy-db-migrator | Preparing upgrade release version: 1100 grafana | logger=migrator t=2024-02-05T23:14:19.210697269Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" kafka | sasl.server.callback.handler.class = null simulator | 2024-02-05 23:14:24,893 INFO Using GSON for REST calls policy-pap | 
[2024-02-05T23:14:53.956+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false prometheus | ts=2024-02-05T23:14:17.398Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 policy-db-migrator | Preparing upgrade release version: 1200 grafana | logger=migrator t=2024-02-05T23:14:19.212221386Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.523887ms kafka | sasl.server.max.receive.size = 524288 simulator | 2024-02-05 23:14:24,956 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE} policy-pap | [2024-02-05T23:14:54.560+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-apex-pdp | isolation.level = read_uncommitted prometheus | ts=2024-02-05T23:14:17.398Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=122.428µs wal_replay_duration=287.356µs wbl_replay_duration=260ns total_replay_duration=485.851µs policy-db-migrator | Preparing upgrade release version: 1300 grafana | logger=migrator t=2024-02-05T23:14:19.215957948Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" kafka | security.inter.broker.protocol = PLAINTEXT simulator | 2024-02-05 23:14:24,969 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} policy-pap | [2024-02-05T23:14:55.158+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer prometheus | ts=2024-02-05T23:14:17.402Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC policy-db-migrator | Done grafana | logger=migrator t=2024-02-05T23:14:19.222897609Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=6.941711ms kafka | security.providers = null simulator | 2024-02-05 23:14:24,977 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1568ms policy-pap | [2024-02-05T23:14:55.284+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-apex-pdp | max.partition.fetch.bytes = 1048576 prometheus | ts=2024-02-05T23:14:17.402Z caller=main.go:1063 level=info msg="TSDB started" policy-db-migrator | name version grafana | logger=migrator t=2024-02-05T23:14:19.226833815Z level=info msg="Executing migration" id="Update plugin_setting table charset" kafka | server.max.startup.time.ms = 9223372036854775807 simulator | 2024-02-05 23:14:24,977 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4382 ms. policy-pap | [2024-02-05T23:14:55.559+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | max.poll.interval.ms = 300000 prometheus | ts=2024-02-05T23:14:17.402Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml policy-db-migrator | policyadmin 0 grafana | logger=migrator t=2024-02-05T23:14:19.226869323Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=40.079µs kafka | socket.connection.setup.timeout.max.ms = 30000 simulator | 2024-02-05 23:14:24,981 INFO org.onap.policy.models.simulators starting SDNC simulator policy-pap | allow.auto.create.topics = true policy-apex-pdp | max.poll.records = 500 prometheus | ts=2024-02-05T23:14:17.403Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.172308ms db_storage=1.35µs remote_storage=3.091µs web_handler=670ns query_engine=1.4µs scrape=221.39µs scrape_sd=116.137µs notify=23.865µs notify_sd=13.123µs rules=2.061µs tracing=5.501µs policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 grafana | logger=migrator t=2024-02-05T23:14:19.230175358Z level=info msg="Executing migration" id="create session table" kafka | socket.connection.setup.timeout.ms = 10000 simulator | 2024-02-05 23:14:24,983 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, 
jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-pap | auto.commit.interval.ms = 5000 policy-apex-pdp | metadata.max.age.ms = 300000 prometheus | ts=2024-02-05T23:14:17.403Z caller=main.go:1024 level=info msg="Server is ready to receive web requests." policy-db-migrator | upgrade: 0 -> 1300 grafana | logger=migrator t=2024-02-05T23:14:19.231303364Z level=info msg="Migration successfully executed" id="create session table" duration=1.127667ms kafka | socket.listen.backlog.size = 50 simulator | 2024-02-05 23:14:24,984 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-pap | auto.include.jmx.reporter = true policy-apex-pdp | metric.reporters = [] prometheus | ts=2024-02-05T23:14:17.404Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 
policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:19.235513984Z level=info msg="Executing migration" id="Drop old table playlist table" kafka | socket.receive.buffer.bytes = 102400 simulator | 2024-02-05 23:14:24,985 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-pap | auto.offset.reset = latest policy-apex-pdp | metrics.num.samples = 2 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql grafana | logger=migrator t=2024-02-05T23:14:19.235623669Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=109.385µs kafka | socket.request.max.bytes = 104857600 simulator | 2024-02-05 23:14:24,986 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 policy-pap | bootstrap.servers = [kafka:9092] policy-apex-pdp | metrics.recording.level = INFO policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:19.238968331Z level=info msg="Executing migration" id="Drop old table playlist_item table" kafka | socket.send.buffer.bytes = 102400 simulator | 2024-02-05 23:14:24,996 INFO Session workerName=node0 policy-pap | check.crcs = true policy-apex-pdp | 
metrics.sample.window.ms = 30000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) grafana | logger=migrator t=2024-02-05T23:14:19.239045269Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=77.457µs kafka | ssl.cipher.suites = [] simulator | 2024-02-05 23:14:25,073 INFO Using GSON for REST calls policy-pap | client.dns.lookup = use_all_dns_ips policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:19.243124858Z level=info msg="Executing migration" id="create playlist table v2" kafka | ssl.client.auth = none simulator | 2024-02-05 23:14:25,083 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE} policy-pap | client.id = consumer-82113737-2238-440a-b31e-67419d0ce49a-1 policy-apex-pdp | receive.buffer.bytes = 65536 policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:19.243822738Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=697.47µs kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] simulator | 2024-02-05 23:14:25,084 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} policy-pap | client.rack = policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:19.247251149Z level=info msg="Executing migration" id="create playlist item table v2" kafka | ssl.endpoint.identification.algorithm = https simulator | 2024-02-05 23:14:25,084 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1675ms policy-pap | connections.max.idle.ms = 540000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-db-migrator | > upgrade 
0110-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-02-05T23:14:19.247977664Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=726.045µs kafka | ssl.engine.factory.class = null policy-pap | default.api.timeout.ms = 60000 simulator | 2024-02-05 23:14:25,084 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4900 ms. 
policy-apex-pdp | request.timeout.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:19.251424839Z level=info msg="Executing migration" id="Update playlist table charset" kafka | ssl.key.password = null policy-pap | enable.auto.commit = true simulator | 2024-02-05 23:14:25,086 INFO org.onap.policy.models.simulators starting SO simulator policy-apex-pdp | retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) grafana | logger=migrator t=2024-02-05T23:14:19.251449935Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=28.867µs kafka | ssl.keymanager.algorithm = SunX509 policy-pap | exclude.internal.topics = true simulator | 2024-02-05 23:14:25,089 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-apex-pdp | 
sasl.client.callback.handler.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:19.25568357Z level=info msg="Executing migration" id="Update playlist_item table charset" kafka | ssl.keystore.certificate.chain = null policy-pap | fetch.max.bytes = 52428800 simulator | 2024-02-05 23:14:25,090 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | sasl.jaas.config = null policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:19.255710026Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.436µs kafka | ssl.keystore.key = null policy-pap | fetch.max.wait.ms = 500 simulator | 2024-02-05 23:14:25,092 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, 
(http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:19.25906462Z level=info msg="Executing migration" id="Add playlist column created_at" kafka | ssl.keystore.location = null policy-pap | fetch.min.bytes = 1 simulator | 2024-02-05 23:14:25,093 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql grafana | logger=migrator t=2024-02-05T23:14:19.262114345Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.049495ms kafka | ssl.keystore.password = null policy-pap | group.id = 82113737-2238-440a-b31e-67419d0ce49a simulator | 2024-02-05 23:14:25,107 INFO Session workerName=node0 policy-apex-pdp | sasl.kerberos.service.name = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:19.265545147Z level=info msg="Executing migration" id="Add playlist column updated_at" kafka | ssl.keystore.type = JKS policy-pap | group.instance.id = null simulator | 2024-02-05 23:14:25,180 INFO Using GSON for REST calls policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) grafana | logger=migrator t=2024-02-05T23:14:19.268627469Z level=info msg="Migration successfully executed" 
id="Add playlist column updated_at" duration=3.081973ms kafka | ssl.principal.mapping.rules = DEFAULT policy-pap | heartbeat.interval.ms = 3000 simulator | 2024-02-05 23:14:25,192 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE} policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:19.272044609Z level=info msg="Executing migration" id="drop preferences table v2" kafka | ssl.protocol = TLSv1.3 policy-pap | interceptor.classes = [] simulator | 2024-02-05 23:14:25,193 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} policy-db-migrator | kafka | ssl.provider = null policy-pap | internal.leave.group.on.close = true simulator | 2024-02-05 23:14:25,193 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @1785ms policy-apex-pdp | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-02-05T23:14:19.272126007Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=81.719µs policy-db-migrator | policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false simulator | 2024-02-05 23:14:25,194 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending 
time is 4899 ms. policy-apex-pdp | sasl.login.class = null grafana | logger=migrator t=2024-02-05T23:14:19.276285094Z level=info msg="Executing migration" id="drop preferences table v3" kafka | ssl.secure.random.implementation = null policy-pap | isolation.level = read_uncommitted simulator | 2024-02-05 23:14:25,195 INFO org.onap.policy.models.simulators starting VFC simulator policy-apex-pdp | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-02-05T23:14:19.276365533Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=84.839µs policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql kafka | ssl.trustmanager.algorithm = PKIX simulator | 2024-02-05 23:14:25,199 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-apex-pdp | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-02-05T23:14:19.278978718Z level=info msg="Executing migration" id="create preferences table v3" policy-db-migrator | -------------- policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | ssl.truststore.certificates = null simulator | 2024-02-05 23:14:25,200 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-02-05T23:14:19.279744763Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=765.574µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-pap | max.partition.fetch.bytes = 1048576 kafka | ssl.truststore.location = null policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-02-05T23:14:19.285084409Z level=info msg="Executing migration" id="Update preferences table charset" policy-db-migrator | -------------- policy-pap | max.poll.interval.ms = 300000 simulator | 2024-02-05 23:14:25,206 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, 
port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | ssl.truststore.password = null grafana | logger=migrator t=2024-02-05T23:14:19.285126509Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=53.013µs policy-db-migrator | policy-pap | max.poll.records = 500 simulator | 2024-02-05 23:14:25,207 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 kafka | ssl.truststore.type = JKS grafana | logger=migrator t=2024-02-05T23:14:19.289799094Z level=info msg="Executing migration" id="Add column team_id in preferences" policy-db-migrator | policy-pap | metadata.max.age.ms = 300000 simulator | 2024-02-05 23:14:25,215 INFO Session workerName=node0 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:19.294506367Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.69151ms policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-pap | metric.reporters = [] simulator | 2024-02-05 23:14:25,256 INFO Using GSON for REST calls policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 kafka | transaction.max.timeout.ms = 900000 grafana | logger=migrator t=2024-02-05T23:14:19.297815721Z level=info msg="Executing migration" id="Update team_id column values in 
preferences" policy-db-migrator | -------------- policy-pap | metrics.num.samples = 2 simulator | 2024-02-05 23:14:25,264 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE} policy-apex-pdp | sasl.login.retry.backoff.ms = 100 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 grafana | logger=migrator t=2024-02-05T23:14:19.29803027Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=214.248µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-pap | metrics.recording.level = INFO simulator | 2024-02-05 23:14:25,265 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} policy-apex-pdp | sasl.mechanism = GSSAPI kafka | transaction.state.log.load.buffer.size = 5242880 grafana | logger=migrator t=2024-02-05T23:14:19.301502531Z level=info msg="Executing migration" id="Add column week_start in preferences" policy-db-migrator | -------------- policy-pap | metrics.sample.window.ms = 30000 simulator | 2024-02-05 23:14:25,265 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @1856ms policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 kafka | transaction.state.log.min.isr = 2 grafana | logger=migrator t=2024-02-05T23:14:19.30457216Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.06904ms policy-db-migrator | policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] simulator | 2024-02-05 23:14:25,265 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4941 ms. policy-apex-pdp | sasl.oauthbearer.expected.audience = null kafka | transaction.state.log.num.partitions = 50 policy-db-migrator | policy-pap | receive.buffer.bytes = 65536 simulator | 2024-02-05 23:14:25,266 INFO org.onap.policy.models.simulators started policy-apex-pdp | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-02-05T23:14:19.307957421Z level=info msg="Executing migration" id="Add column preferences.json_data" kafka | transaction.state.log.replication.factor = 3 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-pap | reconnect.backoff.max.ms = 1000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-02-05T23:14:19.311069361Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.11151ms kafka | transaction.state.log.segment.bytes = 104857600 policy-db-migrator | -------------- policy-pap | reconnect.backoff.ms = 50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:19.3152326Z level=info msg="Executing migration" id="alter preferences.json_data to 
mediumtext v1" kafka | transactional.id.expiration.ms = 604800000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-pap | request.timeout.ms = 30000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-05T23:14:19.315300505Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=68.306µs kafka | unclean.leader.election.enable = false policy-db-migrator | -------------- policy-pap | retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-02-05T23:14:19.318671823Z level=info msg="Executing migration" id="Add preferences index org_id" kafka | unstable.api.versions.enable = false policy-db-migrator | policy-pap | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-02-05T23:14:19.31961954Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=947.466µs kafka | zookeeper.clientCnxnSocket = null policy-db-migrator | policy-pap | sasl.jaas.config = null policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-02-05T23:14:19.323139412Z level=info msg="Executing migration" id="Add preferences index user_id" kafka | zookeeper.connect = zookeeper:2181 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-02-05T23:14:19.324017591Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=877.85µs kafka | zookeeper.connection.timeout.ms = null policy-db-migrator | 
-------------- policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-02-05T23:14:19.328015093Z level=info msg="Executing migration" id="create alert table v1" kafka | zookeeper.max.in.flight.requests = 10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | sasl.kerberos.service.name = null policy-apex-pdp | security.providers = null kafka | zookeeper.metadata.migration.enable = false policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:19.329073284Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.057741ms policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | send.buffer.bytes = 131072 kafka | zookeeper.session.timeout.ms = 18000 policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:19.332219511Z level=info msg="Executing migration" id="add index alert org_id & id " policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | session.timeout.ms = 45000 kafka | zookeeper.set.acl = false policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:19.333209346Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=984.274µs policy-pap | sasl.login.callback.handler.class = null policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 kafka | zookeeper.ssl.cipher.suites = null policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql grafana | logger=migrator t=2024-02-05T23:14:19.337477909Z level=info msg="Executing migration" id="add index alert state" policy-pap | sasl.login.class = null policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 kafka | zookeeper.ssl.client.enable = false policy-db-migrator | -------------- grafana | 
logger=migrator t=2024-02-05T23:14:19.338368301Z level=info msg="Migration successfully executed" id="add index alert state" duration=890.023µs policy-apex-pdp | ssl.cipher.suites = null kafka | zookeeper.ssl.crl.enable = false policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) grafana | logger=migrator t=2024-02-05T23:14:19.353767411Z level=info msg="Executing migration" id="add index alert dashboard_id" policy-pap | sasl.login.connect.timeout.ms = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | zookeeper.ssl.enabled.protocols = null policy-db-migrator | -------------- policy-db-migrator | policy-apex-pdp | ssl.endpoint.identification.algorithm = https kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS grafana | logger=migrator t=2024-02-05T23:14:19.354444835Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=677.954µs policy-db-migrator | policy-pap | sasl.login.read.timeout.ms = null policy-apex-pdp | ssl.engine.factory.class = null kafka | zookeeper.ssl.keystore.location = null grafana | logger=migrator t=2024-02-05T23:14:19.357345556Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | ssl.key.password = null kafka | zookeeper.ssl.keystore.password = null grafana | logger=migrator t=2024-02-05T23:14:19.35793294Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=586.953µs policy-db-migrator | -------------- policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-02-05T23:14:19.362036335Z level=info msg="Executing migration" id="Add unique index 
alert_rule_tag.alert_id_tag_id" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | zookeeper.ssl.keystore.type = null policy-pap | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-02-05T23:14:19.363449467Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.412601ms policy-db-migrator | -------------- kafka | zookeeper.ssl.ocsp.enable = false policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | ssl.keystore.key = null grafana | logger=migrator t=2024-02-05T23:14:19.366648046Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" policy-db-migrator | kafka | zookeeper.ssl.protocol = TLSv1.2 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | ssl.keystore.location = null grafana | logger=migrator t=2024-02-05T23:14:19.367403969Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=755.833µs policy-db-migrator | kafka | zookeeper.ssl.truststore.location = null policy-pap | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS kafka | zookeeper.ssl.truststore.password = null policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-02-05T23:14:19.376102371Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null kafka | zookeeper.ssl.truststore.type = null policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-02-05T23:14:19.386873185Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag 
to alert_rule_tag_v1 - v1" duration=10.769354ms policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX kafka | (kafka.server.KafkaConfig) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-02-05T23:14:19.391641221Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null kafka | [2024-02-05 23:14:21,861] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-02-05T23:14:19.392393513Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=756.493µs policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS kafka | [2024-02-05 23:14:21,862] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-02-05T23:14:19.39663971Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | kafka | [2024-02-05 23:14:21,865] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:19.397350322Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=710.272µs policy-apex-pdp | [2024-02-05T23:14:58.746+00:00|INFO|AppInfoParser|main] Kafka version: 
3.6.0 policy-apex-pdp | [2024-02-05T23:14:58.746+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | [2024-02-05 23:14:21,866] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-05T23:14:19.400684253Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" policy-apex-pdp | [2024-02-05T23:14:58.746+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174898746 policy-apex-pdp | [2024-02-05T23:14:58.746+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Subscribed to topic(s): policy-pdp-pap kafka | [2024-02-05 23:14:21,892] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-02-05T23:14:19.401031612Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=347.339µs policy-apex-pdp | [2024-02-05T23:14:58.751+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=331ed2f3-3c3d-4edb-a439-5458d1b7d3bd, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-02-05T23:14:58.764+00:00|INFO|ProducerConfig|main] ProducerConfig values: kafka | [2024-02-05 23:14:21,895] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-02-05T23:14:19.404677752Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true kafka | [2024-02-05 23:14:21,904] INFO Loaded 0 logs in 12ms (kafka.log.LogManager) policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | 
logger=migrator t=2024-02-05T23:14:19.405915144Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.239952ms policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] kafka | [2024-02-05 23:14:21,906] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-02-05T23:14:19.412062515Z level=info msg="Executing migration" id="create alert_notification table v1" policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips kafka | [2024-02-05 23:14:21,907] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-02-05T23:14:19.412890344Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=827.908µs policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none kafka | [2024-02-05 23:14:21,916] INFO Starting the log cleaner (kafka.log.LogCleaner) policy-pap | security.providers = null grafana | logger=migrator t=2024-02-05T23:14:19.41713245Z level=info msg="Executing migration" id="Add column is_default" policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 kafka | [2024-02-05 23:14:21,958] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-02-05T23:14:19.420834894Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.702255ms policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] kafka | [2024-02-05 23:14:21,972] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) policy-pap | 
session.timeout.ms = 45000 grafana | logger=migrator t=2024-02-05T23:14:19.42520842Z level=info msg="Executing migration" id="Add column frequency" policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 kafka | [2024-02-05 23:14:21,983] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-02-05T23:14:19.42884857Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.63956ms policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 kafka | [2024-02-05 23:14:22,007] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | max.request.size = 1048576 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql kafka | [2024-02-05 23:14:22,417] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-02-05T23:14:19.438567615Z level=info msg="Executing migration" id="Add column send_reminder" policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | [2024-02-05 23:14:22,444] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-02-05T23:14:19.444028059Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=5.462955ms 
policy-db-migrator | -------------- policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-02-05T23:14:19.447596652Z level=info msg="Executing migration" id="Add column disable_resolve_message" kafka | [2024-02-05 23:14:22,444] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) policy-apex-pdp | metadata.max.age.ms = 300000 policy-db-migrator | policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-02-05T23:14:19.451111984Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.521793ms kafka | [2024-02-05 23:14:22,450] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-02-05T23:14:19.455635724Z level=info msg="Executing migration" id="add index alert_notification org_id & name" kafka | [2024-02-05 23:14:22,454] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) policy-apex-pdp | metrics.num.samples = 2 policy-db-migrator | policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-02-05T23:14:19.456646715Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.0102ms kafka | [2024-02-05 23:14:22,472] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-02-05T23:14:19.461028453Z level=info msg="Executing migration" id="Update alert table 
charset" kafka | [2024-02-05 23:14:22,474] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-02-05T23:14:19.461054339Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=27.247µs kafka | [2024-02-05 23:14:22,478] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | policy-db-migrator | policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-02-05T23:14:19.466641772Z level=info msg="Executing migration" id="Update alert_notification table charset" kafka | [2024-02-05 23:14:22,481] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-02-05T23:14:19.466711268Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=70.576µs kafka | [2024-02-05 23:14:22,495] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-02-05T23:14:19.472091574Z level=info msg="Executing migration" id="create notification_journal table v1" kafka | [2024-02-05 23:14:22,517] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) policy-db-migrator | policy-db-migrator | policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-02-05T23:14:19.472926814Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=836.31µs kafka | [2024-02-05 23:14:22,551] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1707174862533,1707174862533,1,0,0,72057609975758849,258,0,27 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-pap | ssl.provider = null grafana | logger=migrator t=2024-02-05T23:14:19.476292082Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" kafka | (kafka.zk.KafkaZkClient) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-02-05T23:14:19.477494775Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.201784ms kafka | [2024-02-05 23:14:22,552] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) policy-db-migrator | policy-db-migrator | policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-02-05T23:14:19.481984279Z level=info msg="Executing migration" id="drop alert_notification_journal" kafka | [2024-02-05 23:14:22,602] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-pap | ssl.truststore.certificates = null grafana | logger=migrator 
grafana | logger=migrator t=2024-02-05T23:14:19.483589484Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.605765ms
kafka | [2024-02-05 23:14:22,612] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-02-05T23:14:19.487979735Z level=info msg="Executing migration" id="create alert_notification_state table v1"
kafka | [2024-02-05 23:14:22,620] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-db-migrator |
policy-db-migrator |
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-02-05T23:14:19.488845622Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=861.646µs
kafka | [2024-02-05 23:14:22,621] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-02-05T23:14:19.494097969Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
kafka | [2024-02-05 23:14:22,625] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-05T23:14:19.494808241Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=710.452µs
kafka | [2024-02-05 23:14:22,643] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
policy-db-migrator |
policy-db-migrator |
policy-pap |
grafana | logger=migrator t=2024-02-05T23:14:19.49809857Z level=info msg="Executing migration" id="Add for to alert table"
kafka | [2024-02-05 23:14:22,646] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:14:55.730+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
grafana | logger=migrator t=2024-02-05T23:14:19.502520578Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.421738ms
kafka | [2024-02-05 23:14:22,648] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:14:55.731+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
grafana | logger=migrator t=2024-02-05T23:14:19.510351463Z level=info msg="Executing migration" id="Add column uid in alert_notification"
kafka | [2024-02-05 23:14:22,650] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator |
policy-db-migrator |
policy-pap | [2024-02-05T23:14:55.731+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174895728
kafka | [2024-02-05 23:14:22,652] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
grafana | logger=migrator t=2024-02-05T23:14:19.514981167Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.629505ms
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:14:55.733+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-1, groupId=82113737-2238-440a-b31e-67419d0ce49a] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-02-05 23:14:22,666] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
grafana | logger=migrator t=2024-02-05T23:14:19.520862157Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:14:55.734+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | [2024-02-05 23:14:22,670] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
grafana | logger=migrator t=2024-02-05T23:14:19.521167378Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=299.319µs
policy-db-migrator |
policy-db-migrator |
policy-pap | allow.auto.create.topics = true
kafka | [2024-02-05 23:14:22,671] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
grafana | logger=migrator t=2024-02-05T23:14:19.526203044Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | --------------
policy-pap | auto.commit.interval.ms = 5000
kafka | [2024-02-05 23:14:22,681] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
grafana | logger=migrator t=2024-02-05T23:14:19.528117111Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.913976ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-02-05 23:14:22,682] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.533226215Z level=info msg="Executing migration" id="Remove unique index org_id_name"
policy-db-migrator |
policy-db-migrator |
policy-pap | auto.offset.reset = latest
kafka | [2024-02-05 23:14:22,690] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.534085961Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=859.945µs
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-db-migrator | --------------
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-02-05 23:14:22,694] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.539736579Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-pap | check.crcs = true
kafka | [2024-02-05 23:14:22,700] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
policy-apex-pdp | metrics.recording.level = INFO
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-02-05 23:14:22,706] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-05T23:14:19.54364661Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.909251ms
policy-db-migrator |
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-pap | client.id = consumer-policy-pap-2
kafka | [2024-02-05 23:14:22,713] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.546654906Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
policy-db-migrator |
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-pap | client.rack =
kafka | [2024-02-05 23:14:22,718] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.546726812Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=72.587µs
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-02-05 23:14:22,723] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
grafana | logger=migrator t=2024-02-05T23:14:19.55155078Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
policy-db-migrator | --------------
policy-db-migrator |
policy-pap | default.api.timeout.ms = 60000
kafka | [2024-02-05 23:14:22,731] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
grafana | logger=migrator t=2024-02-05T23:14:19.552229386Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=678.546µs
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-pap | enable.auto.commit = true
kafka | [2024-02-05 23:14:22,733] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.556033562Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-pap | exclude.internal.topics = true
kafka | [2024-02-05 23:14:22,733] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.556739153Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=705.261µs
policy-db-migrator | --------------
policy-db-migrator |
policy-pap | fetch.max.bytes = 52428800
kafka | [2024-02-05 23:14:22,733] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.559460843Z level=info msg="Executing migration" id="Drop old annotation table v4"
policy-db-migrator |
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-02-05T23:14:19.559543912Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=83.069µs
kafka | [2024-02-05 23:14:22,734] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-02-05T23:14:19.562370816Z level=info msg="Executing migration" id="create annotation table v5"
kafka | [2024-02-05 23:14:22,735] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
policy-db-migrator | --------------
policy-db-migrator |
policy-pap | group.id = policy-pap
grafana | logger=migrator t=2024-02-05T23:14:19.563159567Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=785.63µs
kafka | [2024-02-05 23:14:22,737] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-02-05T23:14:19.566952861Z level=info msg="Executing migration" id="add index annotation 0 v3"
kafka | [2024-02-05 23:14:22,737] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-02-05T23:14:19.567772757Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=819.637µs
kafka | [2024-02-05 23:14:22,738] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-db-migrator |
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-02-05T23:14:19.572149695Z level=info msg="Executing migration" id="add index annotation 1 v3"
kafka | [2024-02-05 23:14:22,738] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
policy-db-migrator |
policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-02-05T23:14:19.573761912Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.616247ms
kafka | [2024-02-05 23:14:22,740] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
policy-apex-pdp | partitioner.availability.timeout.ms = 0
policy-apex-pdp | partitioner.class = null
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-02-05T23:14:19.632265704Z level=info msg="Executing migration" id="add index annotation 2 v3"
kafka | [2024-02-05 23:14:22,743] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
policy-apex-pdp | partitioner.ignore.keys = false
policy-apex-pdp | receive.buffer.bytes = 32768
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-02-05T23:14:19.633920421Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.657658ms
kafka | [2024-02-05 23:14:22,745] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-05T23:14:19.639494681Z level=info msg="Executing migration" id="add index annotation 3 v3"
kafka | [2024-02-05 23:14:22,749] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retries = 2147483647
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-02-05T23:14:19.640694684Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.200993ms
kafka | [2024-02-05 23:14:22,749] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-02-05T23:14:19.643540743Z level=info msg="Executing migration" id="add index annotation 4 v3"
kafka | [2024-02-05 23:14:22,750] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-02-05T23:14:19.644691876Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.111404ms
kafka | [2024-02-05 23:14:22,752] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-02-05T23:14:19.647464578Z level=info msg="Executing migration" id="Update annotation table charset"
kafka | [2024-02-05 23:14:22,753] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-02-05T23:14:19.647493424Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=34.198µs
kafka | [2024-02-05 23:14:22,754] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-02-05T23:14:19.653843991Z level=info msg="Executing migration" id="Add column region_id to annotation table"
kafka | [2024-02-05 23:14:22,754] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-02-05T23:14:19.660991369Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=7.139237ms
kafka | [2024-02-05 23:14:22,755] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-02-05T23:14:19.664297904Z level=info msg="Executing migration" id="Drop category_id index"
kafka | [2024-02-05 23:14:22,757] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-02-05T23:14:19.665119681Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=821.717µs
kafka | [2024-02-05 23:14:22,757] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-02-05T23:14:19.668104341Z level=info msg="Executing migration" id="Add column tags to annotation table"
kafka | [2024-02-05 23:14:22,757] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-02-05T23:14:19.6731474Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=5.041209ms
kafka | [2024-02-05 23:14:22,766] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-02-05T23:14:19.682658107Z level=info msg="Executing migration" id="Create annotation_tag table v2"
kafka | [2024-02-05 23:14:22,768] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-02-05T23:14:19.68350692Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=852.694µs
kafka | [2024-02-05 23:14:22,768] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-05T23:14:19.686958637Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
kafka | [2024-02-05 23:14:22,768] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
policy-db-migrator |
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-02-05T23:14:19.687753128Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=791.54µs
kafka | [2024-02-05 23:14:22,769] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
policy-db-migrator |
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-02-05T23:14:19.693295631Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
kafka | [2024-02-05 23:14:22,777] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser)
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-02-05T23:14:19.694367015Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.072064ms
kafka | [2024-02-05 23:14:22,777] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser)
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-02-05T23:14:19.697849629Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
kafka | [2024-02-05 23:14:22,777] INFO Kafka startTimeMs: 1707174862770 (org.apache.kafka.common.utils.AppInfoParser)
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-02-05T23:14:19.714397959Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=16.543559ms
kafka | [2024-02-05 23:14:22,779] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-02-05T23:14:19.718069797Z level=info msg="Executing migration" id="Create annotation_tag table v3"
kafka | [2024-02-05 23:14:22,786] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-02-05T23:14:19.718729597Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=659.54µs
kafka | [2024-02-05 23:14:22,844] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
policy-pap | sasl.login.callback.handler.class = null
policy-apex-pdp | security.protocol = PLAINTEXT
kafka | [2024-02-05 23:14:22,860] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-02-05T23:14:19.723913668Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
policy-db-migrator |
policy-pap | sasl.login.class = null
policy-apex-pdp | security.providers = null
kafka | [2024-02-05 23:14:22,874] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-02-05T23:14:19.725050527Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.140619ms
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-02-05 23:14:27,788] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.730871343Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
policy-apex-pdp | send.buffer.bytes = 131072
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-02-05T23:14:19.731548108Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=681.266µs
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-02-05 23:14:27,789] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-02-05 23:14:58,039] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
grafana | logger=migrator t=2024-02-05T23:14:19.73630018Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-02-05T23:14:19.737233554Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=933.423µs
policy-apex-pdp | ssl.cipher.suites = null
kafka | [2024-02-05 23:14:58,043] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-02-05 23:14:58,046] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.740449757Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
policy-db-migrator |
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
kafka | [2024-02-05 23:14:58,052] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.740620355Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=170.598µs
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | ssl.engine.factory.class = null
kafka | [2024-02-05 23:14:58,082] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(u46pnWTBR6-v7DJLPWifgQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.745909621Z level=info msg="Executing migration" id="Add created time to annotation table"
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | ssl.key.password = null
kafka | [2024-02-05 23:14:58,083] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-05T23:14:19.749992551Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.077079ms
policy-pap | sasl.mechanism = GSSAPI
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-05 23:14:58,085] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.75700985Z level=info msg="Executing migration" id="Add updated time to annotation table"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-02-05T23:14:19.76100212Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.992129ms
kafka | [2024-02-05 23:14:58,085] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | ssl.keystore.key = null
grafana | logger=migrator t=2024-02-05T23:14:19.763818151Z level=info msg="Executing migration" id="Add index for created in annotation table"
kafka | [2024-02-05 23:14:58,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | ssl.keystore.location = null
grafana | logger=migrator t=2024-02-05T23:14:19.764449075Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=630.243µs
kafka | [2024-02-05 23:14:58,088] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | ssl.keystore.password = null
grafana | logger=migrator t=2024-02-05T23:14:19.768905751Z level=info msg="Executing migration" id="Add index for updated in annotation table"
kafka | [2024-02-05 23:14:58,108] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-02-05T23:14:19.769512639Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=606.598µs
kafka | [2024-02-05 23:14:58,110] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-02-05T23:14:19.776067163Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
kafka | [2024-02-05 23:14:58,111] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | ssl.provider = null
grafana | logger=migrator t=2024-02-05T23:14:19.77640594Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=339.688µs
kafka | [2024-02-05 23:14:58,113] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-02-05T23:14:19.781642303Z level=info msg="Executing migration" id="Add epoch_end column"
kafka | [2024-02-05 23:14:58,113] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-02-05T23:14:19.78886935Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=7.214564ms
kafka | [2024-02-05 23:14:58,114] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-02-05T23:14:19.797089623Z level=info msg="Executing migration" id="Add index for epoch_end"
kafka | [2024-02-05 23:14:58,118] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger)
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-pap | security.protocol = PLAINTEXT
policy-apex-pdp | ssl.truststore.location = null
grafana | logger=migrator t=2024-02-05T23:14:19.797854517Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=766.555µs
kafka | [2024-02-05 23:14:58,125] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | security.providers = null
grafana | logger=migrator t=2024-02-05T23:14:19.800842488Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
kafka | [2024-02-05 23:14:58,128] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(csBd2HU8Tmiot-5BjYrBHg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=),
__consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) policy-apex-pdp | ssl.truststore.password = null policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-02-05T23:14:19.801096776Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=261.409µs policy-apex-pdp | ssl.truststore.type = JKS kafka | [2024-02-05 23:14:58,128] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) policy-db-migrator | -------------- policy-pap | 
session.timeout.ms = 45000 grafana | logger=migrator t=2024-02-05T23:14:19.811022288Z level=info msg="Executing migration" id="Move region to single row" policy-apex-pdp | transaction.timeout.ms = 60000 kafka | [2024-02-05 23:14:58,128] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-02-05T23:14:19.811599659Z level=info msg="Migration successfully executed" id="Move region to single row" duration=577.822µs policy-apex-pdp | transactional.id = null kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:19.816923873Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-02-05T23:14:19.818280873Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.35699ms policy-apex-pdp | kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-02-05T23:14:19.824397766Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" policy-apex-pdp | [2024-02-05T23:14:58.773+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-02-05T23:14:19.825199579Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=801.682µs policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-02-05T23:14:19.828446608Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-02-05T23:14:19.829366849Z level=info msg="Migration successfully executed" id="Add index for 
org_id_dashboard_id_epoch_end_epoch on annotation table" duration=919.43µs kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174898815 policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-02-05T23:14:19.837945034Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=331ed2f3-3c3d-4edb-a439-5458d1b7d3bd, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-02-05T23:14:19.839041953Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.09704ms kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|ServiceManager|main] service manager starting set alive policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-02-05T23:14:19.842005608Z level=info msg="Executing migration" 
id="Remove index org_id_epoch_epoch_end from annotation table" kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-02-05T23:14:19.842975059Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=968.76µs policy-db-migrator | policy-apex-pdp | [2024-02-05T23:14:58.818+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-pap | ssl.keystore.password = null kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:19.847620968Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" policy-db-migrator | policy-apex-pdp | [2024-02-05T23:14:58.818+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-pap | ssl.keystore.type = JKS kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:19.848503719Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=875.949µs policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-02-05 
23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:19.853575365Z level=info msg="Executing migration" id="Increase tags column to length 4096" policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:19.853714656Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=138.942µs policy-pap | ssl.provider = null policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher grafana | logger=migrator t=2024-02-05T23:14:19.857981799Z level=info msg="Executing migration" id="create test_data table" policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=447a3058-d755-46ac-8e2e-59b142489c6a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-02-05T23:14:19.859048132Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.064442ms kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=447a3058-d755-46ac-8e2e-59b142489c6a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted grafana | logger=migrator t=2024-02-05T23:14:19.864387259Z level=info msg="Executing migration" id="create dashboard_version table v1" policy-pap | ssl.truststore.certificates = null policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|ServiceManager|main] service manager starting Create REST server kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:19.865645515Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.259767ms 
policy-apex-pdp | [2024-02-05T23:14:58.846+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.truststore.location = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-02-05T23:14:19.871644773Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" policy-apex-pdp | [] kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.truststore.password = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:19.872595719Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=950.597µs policy-apex-pdp | [2024-02-05T23:14:58.852+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-02-05T23:14:19.877500737Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e3e0d441-6486-4a92-a10b-385c33c9d2d1","timestampMs":1707174898823,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-02-05T23:14:19.879173448Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.67202ms policy-pap | policy-apex-pdp | [2024-02-05T23:14:59.024+00:00|INFO|ServiceManager|main] service manager starting Rest Server kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:19.882336088Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" policy-apex-pdp | [2024-02-05T23:14:59.024+00:00|INFO|ServiceManager|main] service manager starting kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-02-05T23:14:55.741+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-02-05T23:14:59.025+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:19.882812538Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=475.759µs policy-db-migrator | policy-pap | [2024-02-05T23:14:55.741+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | 
[2024-02-05T23:14:59.025+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:19.888751041Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" policy-db-migrator | policy-pap | [2024-02-05T23:14:55.741+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174895741 policy-apex-pdp | [2024-02-05T23:14:59.035+00:00|INFO|ServiceManager|main] service manager started kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 
state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:19.889267098Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=515.828µs policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-pap | [2024-02-05T23:14:55.741+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-02-05T23:14:59.035+00:00|INFO|ServiceManager|main] service manager started kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:19.896947109Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" policy-db-migrator | -------------- policy-apex-pdp | [2024-02-05T23:14:59.035+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-02-05T23:14:56.085+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json grafana | logger=migrator t=2024-02-05T23:14:19.897051892Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=105.814µs kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | [2024-02-05T23:14:59.035+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, 
connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | [2024-02-05T23:14:56.277+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
grafana | logger=migrator t=2024-02-05T23:14:19.902772206Z level=info msg="Executing migration" id="create team table"
kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:14:59.162+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ
policy-pap | [2024-02-05T23:14:56.523+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2c1a95a2, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1adf387e, org.springframework.security.web.context.SecurityContextHolderFilter@3909308c, org.springframework.security.web.header.HeaderWriterFilter@2e2cd42c, org.springframework.security.web.authentication.logout.LogoutFilter@4af44f2a, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5020e5ab, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3b6c740b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@78f4d15d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@72b53f27, org.springframework.security.web.access.ExceptionTranslationFilter@581d5b33, org.springframework.security.web.access.intercept.AuthorizationFilter@7db2b614]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:19.90437172Z level=info msg="Migration successfully executed" id="create team table" duration=1.591102ms
kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:14:59.163+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ
policy-pap | [2024-02-05T23:14:57.357+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:19.909004036Z level=info msg="Executing migration" id="add index team.org_id"
kafka | [2024-02-05 23:14:58,131] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:14:59.164+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2024-02-05T23:14:57.452+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:19.910045583Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.041307ms
kafka | [2024-02-05 23:14:58,131] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:14:59.165+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-pap | [2024-02-05T23:14:57.468+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
grafana | logger=migrator t=2024-02-05T23:14:19.915456276Z level=info msg="Executing migration" id="add unique index team_org_id_name"
kafka | [2024-02-05 23:14:58,131] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:14:59.169+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] (Re-)joining group
policy-pap | [2024-02-05T23:14:57.484+00:00|INFO|ServiceManager|main] Policy PAP starting
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:19.917165046Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.706119ms
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:14:59.180+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Request joining group due to: need to re-join with the given member-id: consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d
policy-pap | [2024-02-05T23:14:57.484+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-05T23:14:19.922302216Z level=info msg="Executing migration" id="Add column uid in team"
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:14:59.180+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-pap | [2024-02-05T23:14:57.484+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:19.927093788Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.792361ms
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:14:59.180+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] (Re-)joining group
policy-pap | [2024-02-05T23:14:57.485+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:19.932850699Z level=info msg="Executing migration" id="Update uid column values in team"
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:14:59.629+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-pap | [2024-02-05T23:14:57.485+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:19.93311297Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=288.286µs
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:14:59.629+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-pap | [2024-02-05T23:14:57.485+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
grafana | logger=migrator t=2024-02-05T23:14:19.939233275Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
policy-apex-pdp | [2024-02-05T23:15:02.185+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d', protocol='range'}
policy-pap | [2024-02-05T23:14:57.486+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | [2024-02-05T23:15:02.196+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Finished assignment for group at generation 1: {consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2024-02-05T23:14:57.492+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=82113737-2238-440a-b31e-67419d0ce49a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@509e4902
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.940609158Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.374313ms
policy-apex-pdp | [2024-02-05T23:15:02.207+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d', protocol='range'}
policy-pap | [2024-02-05T23:14:57.502+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=82113737-2238-440a-b31e-67419d0ce49a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.944908248Z level=info msg="Executing migration" id="create team member table"
policy-apex-pdp | [2024-02-05T23:15:02.208+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2024-02-05T23:14:57.502+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.946005357Z level=info msg="Migration successfully executed" id="create team member table" duration=1.096359ms
policy-apex-pdp | [2024-02-05T23:15:02.210+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Adding newly assigned partitions: policy-pdp-pap-0
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.951992902Z level=info msg="Executing migration" id="add index team_member.org_id"
policy-pap | allow.auto.create.topics = true
policy-apex-pdp | [2024-02-05T23:15:02.219+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Found no committed offset for partition policy-pdp-pap-0
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.952921024Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=931.043µs
policy-pap | auto.commit.interval.ms = 5000
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-apex-pdp | [2024-02-05T23:15:02.232+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.959854763Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator | --------------
policy-apex-pdp | [2024-02-05T23:15:18.821+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.961499559Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.643945ms
policy-pap | auto.offset.reset = latest
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce579082-502e-4e58-9380-d7baab3a6748","timestampMs":1707174918820,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.966007915Z level=info msg="Executing migration" id="add index team_member.team_id"
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator | --------------
policy-apex-pdp | [2024-02-05T23:15:18.842+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.967352422Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.344267ms
policy-pap | check.crcs = true
grafana | logger=migrator t=2024-02-05T23:14:19.970130765Z level=info msg="Executing migration" id="Add column email to team table"
policy-pap | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce579082-502e-4e58-9380-d7baab3a6748","timestampMs":1707174918820,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-pap | client.id = consumer-82113737-2238-440a-b31e-67419d0ce49a-3
policy-apex-pdp | [2024-02-05T23:15:18.845+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.975024771Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.893585ms
policy-db-migrator | 
policy-pap | client.rack = 
policy-apex-pdp | [2024-02-05T23:15:19.011+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.980803808Z level=info msg="Executing migration" id="Add column external to team_member table"
policy-db-migrator | 
policy-pap | connections.max.idle.ms = 540000
policy-apex-pdp | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"687caf5b-4d92-42de-acdb-f82aab7cc43c","timestampMs":1707174918952,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.985617164Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.812396ms
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-pap | default.api.timeout.ms = 60000
policy-apex-pdp | [2024-02-05T23:15:19.023+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.989881016Z level=info msg="Executing migration" id="Add column permission to team_member table"
policy-db-migrator | --------------
policy-pap | enable.auto.commit = true
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"73abfafa-5e87-480f-971c-84c352b572be","timestampMs":1707174919022,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.994709416Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.827991ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
policy-pap | exclude.internal.topics = true
policy-apex-pdp | [2024-02-05T23:15:19.023+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.998350416Z level=info msg="Executing migration" id="create dashboard acl table"
policy-db-migrator | --------------
policy-pap | fetch.max.bytes = 52428800
policy-apex-pdp | [2024-02-05T23:15:19.026+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:19.999357085Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.006409ms
grafana | logger=migrator t=2024-02-05T23:14:20.006853361Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
policy-pap | fetch.max.wait.ms = 500
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"687caf5b-4d92-42de-acdb-f82aab7cc43c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"97dae9b3-163a-482b-805c-f915fcf0db7a","timestampMs":1707174919025,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:20.00772592Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=872.538µs
policy-pap | fetch.min.bytes = 1
policy-apex-pdp | [2024-02-05T23:15:19.039+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:20.010857192Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
policy-pap | group.id = 82113737-2238-440a-b31e-67419d0ce49a
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"73abfafa-5e87-480f-971c-84c352b572be","timestampMs":1707174919022,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
grafana | logger=migrator t=2024-02-05T23:14:20.012518231Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.660428ms
policy-pap | group.instance.id = null
policy-apex-pdp | [2024-02-05T23:15:19.039+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:20.015947751Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
policy-pap | heartbeat.interval.ms = 3000
policy-apex-pdp | [2024-02-05T23:15:19.042+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-02-05T23:14:20.017453804Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.505243ms
policy-pap | interceptor.classes = []
kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"687caf5b-4d92-42de-acdb-f82aab7cc43c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"97dae9b3-163a-482b-805c-f915fcf0db7a","timestampMs":1707174919025,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:20.023995122Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
policy-pap | internal.leave.group.on.close = true
kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-02-05T23:15:19.042+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:20.024847746Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=852.624µs
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-02-05T23:15:19.079+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:20.027710668Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
policy-pap | isolation.level = read_uncommitted
kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"54fa79c2-9992-4631-b988-2a9cecf2df7f","timestampMs":1707174918953,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | > upgrade 0450-pdpgroup.sql
grafana | logger=migrator t=2024-02-05T23:14:20.029075389Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.364121ms
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-02-05T23:15:19.081+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:20.033198547Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
policy-pap | max.partition.fetch.bytes = 1048576
kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"54fa79c2-9992-4631-b988-2a9cecf2df7f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d0f9af2d-ed17-4886-8d33-16b52a441775","timestampMs":1707174919081,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
grafana | logger=migrator t=2024-02-05T23:14:20.034605478Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.40594ms
policy-pap | max.poll.interval.ms = 300000
kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-02-05T23:15:19.088+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:20.042775888Z level=info msg="Executing migration" id="add index dashboard_permission"
policy-pap | max.poll.records = 500
kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"54fa79c2-9992-4631-b988-2a9cecf2df7f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d0f9af2d-ed17-4886-8d33-16b52a441775","timestampMs":1707174919081,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:20.044283241Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.498271ms
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-02-05T23:15:19.088+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:20.049021109Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
policy-pap | metric.reporters = []
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-02-05T23:15:19.115+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
grafana | logger=migrator t=2024-02-05T23:14:20.049781212Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=762.433µs
policy-pap | metrics.num.samples = 2
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39c1570c-dd05-4983-bede-a1e58213f1cf","timestampMs":1707174919093,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-02-05T23:14:20.055920809Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-02-05T23:15:19.117+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-02-05T23:14:20.056258007Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=337.407µs
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39c1570c-dd05-4983-bede-a1e58213f1cf","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"596c9de2-f9f1-49de-b6fb-fa0fd5c472b5","timestampMs":1707174919116,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | metrics.sample.window.ms = 30000
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:20.104703054Z level=info msg="Executing migration" id="create tag table"
policy-apex-pdp | [2024-02-05T23:15:19.125+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:20.105420877Z level=info msg="Migration successfully executed" id="create tag table" duration=717.323µs
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39c1570c-dd05-4983-bede-a1e58213f1cf","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"596c9de2-f9f1-49de-b6fb-fa0fd5c472b5","timestampMs":1707174919116,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | receive.buffer.bytes = 65536
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-02-05T23:14:20.110398361Z level=info msg="Executing migration" id="add index tag.key_value"
policy-apex-pdp | [2024-02-05T23:15:19.125+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:20.11136145Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=962.909µs
policy-apex-pdp | [2024-02-05T23:15:56.161+00:00|INFO|RequestLog|qtp830863979-33] 172.17.0.2 - policyadmin [05/Feb/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.49.1"
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:20.118241226Z level=info msg="Executing migration" id="create login attempt table"
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | request.timeout.ms = 30000
policy-db-migrator | 
grafana | logger=migrator t=2024-02-05T23:14:20.119104282Z level=info msg="Migration successfully executed" id="create login attempt table" duration=862.666µs
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | retry.backoff.ms = 100
policy-db-migrator | > upgrade 0470-pdp.sql
kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:20.122180703Z level=info msg="Executing migration" id="add index
login_attempt.username" policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.124270078Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=2.092097ms policy-pap | sasl.jaas.config = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.129638451Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.130615453Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=975.082µs policy-pap | sasl.kerberos.service.name = null kafka | [2024-02-05 23:14:58,143] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:20.135402643Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:20.157458643Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=22.05354ms policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0480-pdpstatistics.sql grafana | logger=migrator t=2024-02-05T23:14:20.162306796Z level=info msg="Executing migration" id="create login_attempt v2" policy-pap | sasl.login.callback.handler.class = null kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.16302513Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=720.115µs policy-pap | sasl.login.class = null kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.166129817Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" policy-pap | 
sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.16706804Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=937.433µs policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.17809363Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.178440529Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=346.708µs policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from 
NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.184085224Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-02-05T23:14:20.185110087Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.024373ms kafka | [2024-02-05 23:14:58,146] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.190206467Z level=info msg="Executing migration" id="create user auth table" policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:20.191396208Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.189162ms kafka | [2024-02-05 23:14:58,146] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.198110627Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-02-05 23:14:58,146] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-02-05T23:14:20.199112834Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.001818ms policy-db-migrator | kafka | 
[2024-02-05 23:14:58,146] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-02-05T23:14:20.203305639Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql kafka | [2024-02-05 23:14:58,146] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-02-05T23:14:20.203523339Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=218.07µs policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,156] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-02-05T23:14:20.212556124Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" kafka | [2024-02-05 23:14:58,162] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-02-05 23:14:58,163] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.218563722Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.006817ms policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion 
VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-02-05 23:14:58,259] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-05T23:14:20.222982478Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-02-05 23:14:58,274] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-05T23:14:20.231310514Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.324755ms policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-02-05 23:14:58,276] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:20.23739955Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" policy-db-migrator | policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-02-05 23:14:58,276] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:20.24262947Z level=info msg="Migration successfully executed" 
id="Add OAuth token type to user_auth" duration=5.230361ms policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-02-05 23:14:58,278] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(u46pnWTBR6-v7DJLPWifgQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.245694428Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.250729204Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.034505ms policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | security.protocol = PLAINTEXT kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.255929048Z 
level=info msg="Executing migration" id="Add index to user_id column in user_auth" policy-db-migrator | -------------- policy-pap | security.providers = null kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.256639819Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=710.982µs policy-db-migrator | policy-pap | send.buffer.bytes = 131072 kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | session.timeout.ms = 45000 grafana | logger=migrator t=2024-02-05T23:14:20.261540525Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-02-05T23:14:20.266727696Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.18634ms kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:20.272129176Z level=info msg="Executing migration" id="create server_lock table" kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.272936059Z level=info msg="Migration successfully executed" id="create server_lock table" duration=806.643µs policy-db-migrator | kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.278899446Z level=info msg="Executing migration" id="add index server_lock.operation_uid" policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-pap | ssl.cipher.suites = null kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.28080177Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.902464ms policy-db-migrator | -------------- policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.288747567Z level=info msg="Executing migration" id="create user auth token table" policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-02-05T23:14:20.289693113Z level=info msg="Migration successfully executed" id="create user auth token table" duration=948.046µs kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-02-05T23:14:20.293315857Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.key.password = null kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.294526564Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.210466ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.30510269Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.306544209Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.441198ms policy-db-migrator | -------------- policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.310762939Z level=info msg="Executing migration" id="add index user_auth_token.user_id" policy-db-migrator | policy-pap | ssl.keystore.key = null kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.312580103Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.817114ms policy-db-migrator | policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-02-05T23:14:20.318402448Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" policy-pap | ssl.keystore.password = null kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql grafana | logger=migrator t=2024-02-05T23:14:20.323824942Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.421604ms policy-pap | ssl.keystore.type = JKS kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:20.371115447Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) grafana | logger=migrator t=2024-02-05T23:14:20.374016847Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=2.895849ms policy-pap | ssl.provider = null kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:20.379861948Z level=info msg="Executing migration" id="create cache_data table" policy-pap | ssl.secure.random.implementation = null kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:20.380607217Z level=info msg="Migration 
successfully executed" id="create cache_data table" duration=745.15µs policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-02-05T23:14:20.387984657Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" policy-db-migrator | policy-pap | ssl.truststore.password = null kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql grafana | logger=migrator t=2024-02-05T23:14:20.388904105Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=911.257µs policy-pap | ssl.truststore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:20.394703585Z level=info msg="Executing migration" id="create short_url table v1" kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, 
conceptContainerName, conceptContainerVersion)) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-02-05T23:14:20.396093202Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.387846ms kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:20.401567438Z level=info msg="Executing migration" id="add index short_url.org_id-uid" policy-pap | [2024-02-05T23:14:57.508+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:20.403601251Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.033092ms kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql grafana | logger=migrator t=2024-02-05T23:14:20.408946017Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" kafka | 
[2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-02-05T23:14:57.508+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:20.409023295Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=77.478µs kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-02-05T23:14:57.508+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174897508 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) grafana | logger=migrator t=2024-02-05T23:14:20.415685322Z level=info msg="Executing migration" id="delete alert_definition table" policy-pap | [2024-02-05T23:14:57.509+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Subscribed to topic(s): policy-pdp-pap kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:20.41584986Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=153.035µs policy-pap | [2024-02-05T23:14:57.509+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:20.421823219Z level=info msg="Executing migration" id="recreate alert_definition table" policy-pap | [2024-02-05T23:14:57.509+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cd7670e3-0a24-44c8-9ed2-b9e3c70e4f45, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f190cfe kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator 
t=2024-02-05T23:14:20.423458691Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.639143ms policy-pap | [2024-02-05T23:14:57.509+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cd7670e3-0a24-44c8-9ed2-b9e3c70e4f45, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql grafana | logger=migrator t=2024-02-05T23:14:20.430916428Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" policy-pap | [2024-02-05T23:14:57.509+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:20.433541726Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" 
duration=2.612574ms policy-pap | allow.auto.create.topics = true kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) grafana | logger=migrator t=2024-02-05T23:14:20.43896021Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" policy-pap | auto.commit.interval.ms = 5000 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:20.440047877Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.086877ms policy-pap | auto.include.jmx.reporter = true kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:20.445634378Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" policy-pap | auto.offset.reset = latest kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:20.445710196Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=77.467µs policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.450965862Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | check.crcs = true policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql grafana | logger=migrator t=2024-02-05T23:14:20.452010451Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.044138ms policy-pap | client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-02-05T23:14:20.457053308Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | client.id = consumer-policy-pap-4 grafana | logger=migrator t=2024-02-05T23:14:20.458344862Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.291023ms kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | client.rack = grafana | logger=migrator t=2024-02-05T23:14:20.467001262Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-02-05T23:14:20.468053671Z level=info msg="Migration successfully executed" id="add unique index in 
alert_definition on org_id and title columns" duration=1.060121ms kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-02-05T23:14:20.472851274Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | enable.auto.commit = true grafana | logger=migrator t=2024-02-05T23:14:20.473573278Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=721.654µs kafka | [2024-02-05 23:14:58,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-pap | exclude.internal.topics = true grafana | logger=migrator t=2024-02-05T23:14:20.481559306Z level=info msg="Executing migration" id="Add column paused in alert_definition" kafka | [2024-02-05 23:14:58,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-02-05T23:14:20.485738938Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.180613ms kafka | [2024-02-05 23:14:58,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-pap | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-02-05T23:14:20.490531148Z level=info msg="Executing migration" id="drop alert_definition table" kafka | [2024-02-05 23:14:58,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | fetch.min.bytes = 1 grafana | logger=migrator t=2024-02-05T23:14:20.491400166Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=868.309µs kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | policy-pap | group.id = policy-pap grafana | logger=migrator t=2024-02-05T23:14:20.495925266Z level=info msg="Executing migration" id="delete alert_definition_version table" kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | policy-pap | group.instance.id = null grafana | logger=migrator t=2024-02-05T23:14:20.496096545Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=171.799µs kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-pap | heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-02-05T23:14:20.500170082Z level=info msg="Executing migration" id="recreate alert_definition_version table" kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 
for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | -------------- policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-02-05T23:14:20.501471098Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.300935ms kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-pap | internal.leave.group.on.close = true grafana | logger=migrator t=2024-02-05T23:14:20.561199204Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | -------------- policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false grafana | logger=migrator t=2024-02-05T23:14:20.563134864Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.93911ms policy-db-migrator | policy-pap | isolation.level = read_uncommitted grafana | logger=migrator t=2024-02-05T23:14:20.567642901Z 
level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) policy-db-migrator | policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-02-05T23:14:20.568589276Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=945.985µs kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-pap | max.partition.fetch.bytes = 1048576 grafana | logger=migrator t=2024-02-05T23:14:20.5729094Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 
(state.change.logger) policy-db-migrator | -------------- policy-pap | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-02-05T23:14:20.572977415Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=68.916µs kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | max.poll.records = 500 grafana | logger=migrator t=2024-02-05T23:14:20.577684977Z level=info msg="Executing migration" id="drop alert_definition_version table" kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-db-migrator | -------------- policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-02-05T23:14:20.579197711Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" 
duration=1.522236ms kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-db-migrator | policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-02-05T23:14:20.586181281Z level=info msg="Executing migration" id="create alert_instance table" kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-db-migrator | policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-02-05T23:14:20.587140949Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=959.308µs kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-02-05T23:14:20.590914848Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" kafka | [2024-02-05 23:14:58,285] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) policy-db-migrator | -------------- policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-02-05T23:14:20.592356636Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.440878ms kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-02-05T23:14:20.599682973Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-db-migrator | -------------- policy-pap | receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-02-05T23:14:20.60067591Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=992.797µs kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-02-05T23:14:20.604651285Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-02-05T23:14:20.609269106Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.618142ms kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-pap | request.timeout.ms = 30000 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.613366148Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" policy-db-migrator | -------------- policy-pap | retry.backoff.ms = 100 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.614110348Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=743.43µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) policy-pap | sasl.client.callback.handler.class = null kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] 
Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.617877946Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:20.618603641Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=725.465µs kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) policy-pap | sasl.jaas.config = null policy-db-migrator | kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-02-05T23:14:20.620989894Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.656188747Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=35.194821ms policy-pap | sasl.kerberos.service.name = null policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.660184295Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.693363488Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=33.172581ms policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.696856763Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:20.697643121Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=783.348µs policy-db-migrator | kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-02-05T23:14:20.701651514Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" policy-db-migrator | kafka | [2024-02-05 23:14:58,286] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-02-05T23:14:20.703316183Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.668089ms policy-db-migrator | > upgrade 0630-toscanodetype.sql kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-02-05T23:14:20.706425941Z level=info msg="Executing migration" id="add current_reason column related to current_state" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-02-05T23:14:20.713676071Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.25106ms policy-db-migrator | 
CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-02-05T23:14:20.717635963Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,286] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-02-05T23:14:20.723182246Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.545952ms policy-db-migrator | kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-02-05T23:14:20.727538037Z level=info 
msg="Executing migration" id="create alert_rule table" policy-db-migrator | kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:20.728386229Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=847.963µs policy-db-migrator | > upgrade 0640-toscanodetypes.sql kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-05T23:14:20.73300301Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-02-05T23:14:20.73409969Z level=info msg="Migration successfully executed" id="add index in alert_rule on 
org_id and title columns" duration=1.09908ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-02-05T23:14:20.738214647Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-02-05T23:14:20.739944811Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.729474ms policy-db-migrator | kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) policy-pap | 
sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-02-05T23:14:20.743916585Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" policy-db-migrator | kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-02-05T23:14:20.745795883Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.875478ms policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.748880015Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.748975427Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=95.801µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.752119502Z level=info msg="Executing migration" id="add column for to alert_rule" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.759923729Z level=info 
msg="Migration successfully executed" id="add column for to alert_rule" duration=7.802256ms policy-db-migrator | policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.764464131Z level=info msg="Executing migration" id="add column annotations to alert_rule" policy-db-migrator | policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.768711119Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.246737ms policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-pap | security.protocol = PLAINTEXT kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.784913567Z level=info msg="Executing migration" id="add 
column labels to alert_rule" policy-db-migrator | -------------- policy-pap | security.providers = null kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:20.795285668Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=10.364068ms kafka | [2024-02-05 23:14:58,287] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-02-05T23:14:20.799237557Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-02-05 23:14:58,287] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) policy-pap | session.timeout.ms = 45000 grafana | logger=migrator t=2024-02-05T23:14:20.799915181Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=677.274µs policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-02-05T23:14:20.802832235Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" policy-db-migrator | kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:20.803549958Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=717.473µs policy-db-migrator | kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-02-05T23:14:20.807955302Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" policy-db-migrator | > upgrade 0670-toscapolicies.sql kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-02-05T23:14:20.816228935Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=8.274634ms policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-02-05T23:14:20.819682941Z level=info msg="Executing migration" id="add 
panel_id column to alert_rule" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-02-05T23:14:20.826873878Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.185066ms policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-02-05T23:14:20.830894743Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" policy-db-migrator | kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-02-05T23:14:20.83198391Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.088988ms policy-db-migrator | kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-02-05T23:14:20.836332561Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-02-05T23:14:20.842661951Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.328861ms policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-02-05T23:14:20.848286972Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-02-05T23:14:20.854894677Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.607255ms policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-02-05T23:14:20.860854553Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 
policy-db-migrator | kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-02-05T23:14:20.860920718Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=67.005µs policy-db-migrator | kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.provider = null grafana | logger=migrator t=2024-02-05T23:14:20.863942146Z level=info msg="Executing migration" id="create alert_rule_version table" policy-db-migrator | > upgrade 0690-toscapolicy.sql kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-02-05T23:14:20.864882779Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=937.453µs policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-02-05T23:14:20.869168395Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY 
PK_TOSCAPOLICY (name, version)) kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-02-05T23:14:20.870166842Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=998.077µs policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-02-05T23:14:20.874032172Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" policy-db-migrator | kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-02-05T23:14:20.875113188Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.080687ms policy-db-migrator | kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-02-05T23:14:20.879123621Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica 
(state.change.logger)
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-05T23:14:20.879184565Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=61.474µs
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
policy-pap |
grafana | logger=migrator t=2024-02-05T23:14:20.884554407Z level=info msg="Executing migration" id="add column for to alert_rule_version"
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
grafana | logger=migrator t=2024-02-05T23:14:20.890682762Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.128185ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:20.894679913Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174897514
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:20.900737201Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.062629ms
policy-db-migrator |
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
grafana | logger=migrator t=2024-02-05T23:14:20.90376542Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
policy-db-migrator |
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|ServiceManager|main] Policy PAP starting topics
grafana | logger=migrator t=2024-02-05T23:14:20.909809936Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.042045ms
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cd7670e3-0a24-44c8-9ed2-b9e3c70e4f45, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
grafana | logger=migrator t=2024-02-05T23:14:20.916126974Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-05T23:14:57.515+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=82113737-2238-440a-b31e-67419d0ce49a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
grafana | logger=migrator t=2024-02-05T23:14:20.922152386Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.025163ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-05T23:14:57.515+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8340479e-6066-4c0d-8cea-5ee1d125717a, alive=false, publisher=null]]: starting
grafana | logger=migrator t=2024-02-05T23:14:20.9260363Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-05T23:14:57.540+00:00|INFO|ProducerConfig|main] ProducerConfig values:
grafana | logger=migrator t=2024-02-05T23:14:20.932050428Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.013959ms
policy-db-migrator |
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | acks = -1
grafana | logger=migrator t=2024-02-05T23:14:20.937178986Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
policy-db-migrator |
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-02-05T23:14:20.937242Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=63.755µs
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | batch.size = 16384
grafana | logger=migrator t=2024-02-05T23:14:20.94264359Z level=info msg="Executing migration" id=create_alert_configuration_table
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-02-05T23:14:20.943323644Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=679.715µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | buffer.memory = 33554432
grafana | logger=migrator t=2024-02-05T23:14:20.948415114Z level=info msg="Executing migration" id="Add column default in alert_configuration"
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-02-05T23:14:20.954530155Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.112361ms
policy-db-migrator |
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | client.id = producer-1
grafana | logger=migrator t=2024-02-05T23:14:20.958326109Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
policy-db-migrator |
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | compression.type = none
grafana | logger=migrator t=2024-02-05T23:14:20.958388313Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=62.874µs
policy-db-migrator | > upgrade 0730-toscaproperty.sql
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-02-05T23:14:20.964716084Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | delivery.timeout.ms = 120000
grafana | logger=migrator t=2024-02-05T23:14:20.970588091Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.873806ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | enable.idempotence = true
grafana | logger=migrator t=2024-02-05T23:14:20.976231575Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-02-05T23:14:20.977009012Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=777.186µs
policy-db-migrator |
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
grafana | logger=migrator t=2024-02-05T23:14:21.015717227Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
policy-db-migrator |
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | linger.ms = 0
grafana | logger=migrator t=2024-02-05T23:14:21.022014207Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.29672ms
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | max.block.ms = 60000
grafana | logger=migrator t=2024-02-05T23:14:21.025692922Z level=info msg="Executing migration" id=create_ngalert_configuration_table
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | max.in.flight.requests.per.connection = 5
grafana | logger=migrator t=2024-02-05T23:14:21.026397493Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=704.311µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
kafka | [2024-02-05 23:14:58,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | max.request.size = 1048576
grafana | logger=migrator t=2024-02-05T23:14:21.032696323Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-02-05T23:14:21.033660022Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=963.428µs
policy-db-migrator |
kafka | [2024-02-05 23:14:58,290] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-pap | metadata.max.idle.ms = 300000
grafana | logger=migrator t=2024-02-05T23:14:21.038050079Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
policy-db-migrator |
kafka | [2024-02-05 23:14:58,294] INFO [Broker id=1] Finished LeaderAndIsr request in 178ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-02-05T23:14:21.046356595Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=8.312998ms
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
kafka | [2024-02-05 23:14:58,299] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=u46pnWTBR6-v7DJLPWifgQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-02-05T23:14:21.050075359Z level=info msg="Executing migration" id="create provenance_type table"
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-02-05T23:14:21.050600869Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=525.389µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
kafka | [2024-02-05 23:14:58,307] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-02-05T23:14:21.055172797Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,308] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
policy-pap | partitioner.adaptive.partitioning.enable = true
grafana | logger=migrator t=2024-02-05T23:14:21.056138176Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=964.989µs
policy-db-migrator |
kafka | [2024-02-05 23:14:58,312] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
policy-pap | partitioner.availability.timeout.ms = 0
grafana | logger=migrator t=2024-02-05T23:14:21.062328112Z level=info msg="Executing migration" id="create alert_image table"
policy-db-migrator |
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | partitioner.class = null
grafana | logger=migrator t=2024-02-05T23:14:21.06311286Z level=info msg="Migration successfully executed" id="create alert_image table" duration=784.328µs
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | partitioner.ignore.keys = false
grafana | logger=migrator t=2024-02-05T23:14:21.071563139Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | receive.buffer.bytes = 32768
grafana | logger=migrator t=2024-02-05T23:14:21.072660448Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.10182ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.076486247Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
policy-db-migrator | --------------
policy-pap | reconnect.backoff.ms = 50
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.07653989Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=49.661µs
policy-db-migrator |
policy-pap | request.timeout.ms = 30000
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.080600301Z level=info msg="Executing migration" id=create_alert_configuration_history_table
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | retries = 2147483647
policy-db-migrator | > upgrade 0770-toscarequirement.sql
grafana | logger=migrator t=2024-02-05T23:14:21.081260231Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=661.25µs
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | retry.backoff.ms = 100
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.084831082Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
grafana | logger=migrator t=2024-02-05T23:14:21.085671303Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=844.071µs
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.jaas.config = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.089513755Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.089974449Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.094342282Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | > upgrade 0780-toscarequirements.sql
grafana | logger=migrator t=2024-02-05T23:14:21.094652362Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=310.03µs
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.0974172Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
grafana | logger=migrator t=2024-02-05T23:14:21.098185344Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=767.644µs
kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.callback.handler.class = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.101055486Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.105951498Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=4.895572ms
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.110340194Z level=info msg="Executing migration" id="create library_element table v1"
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
grafana | logger=migrator t=2024-02-05T23:14:21.111004195Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=663.741µs
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.113905274Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-02-05T23:14:21.114890117Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=986.103µs
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.117813361Z level=info msg="Executing migration" id="create library_element_connection table v1"
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.118424471Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=612.03µs
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.122719375Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
grafana | logger=migrator t=2024-02-05T23:14:21.123438298Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=718.453µs
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.126466566Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
grafana | logger=migrator t=2024-02-05T23:14:21.127179548Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=712.633µs
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.130007891Z level=info msg="Executing migration" id="increase max description length to 2048"
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.130026485Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=19.265µs
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-02-05 23:14:58,314]
TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:21.134218576Z level=info msg="Executing migration" id="alter library_element model to mediumtext" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql grafana | logger=migrator t=2024-02-05T23:14:21.134268398Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=50.171µs policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.137073074Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-02-05T23:14:21.137511154Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=438.129µs policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.141301895Z level=info msg="Executing migration" id="create data_keys table" policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 
(state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.143094331Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.793567ms
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.146324446Z level=info msg="Executing migration" id="create secrets table"
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.147147473Z level=info msg="Migration successfully executed" id="create secrets table" duration=822.506µs
policy-pap | security.providers = null
policy-db-migrator | > upgrade 0820-toscatrigger.sql
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.151933249Z level=info msg="Executing migration" id="rename data_keys name column to id"
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.201697208Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=49.763599ms
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.205908915Z level=info msg="Executing migration" id="add name column into data_keys"
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.211651469Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.737212ms
policy-db-migrator | --------------
policy-pap | ssl.cipher.suites = null
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.214731048Z level=info msg="Executing migration" id="copy data_keys id column values into name"
policy-db-migrator |
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.214839473Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=108.464µs
policy-db-migrator |
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.221686057Z level=info msg="Executing migration" id="rename data_keys name column to label"
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-pap | ssl.engine.factory.class = null
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.274116952Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=52.425754ms
policy-db-migrator | --------------
policy-pap | ssl.key.password = null
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.28899372Z level=info msg="Executing migration" id="rename data_keys id column back to name"
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.334953697Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=45.960516ms
policy-db-migrator | --------------
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.342261196Z level=info msg="Executing migration" id="create kv_store table v1"
policy-db-migrator |
policy-pap | ssl.keystore.key = null
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.343738371Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.480047ms
policy-db-migrator |
policy-pap | ssl.keystore.location = null
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.35363974Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-pap | ssl.keystore.password = null
kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.354836721Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.197032ms
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-02-05T23:14:21.358324833Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-02-05T23:14:21.358612658Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=287.405µs
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
policy-db-migrator |
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-02-05T23:14:21.364049843Z level=info msg="Executing migration" id="create permission table"
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
policy-db-migrator |
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-02-05T23:14:21.364973124Z level=info msg="Migration successfully executed" id="create permission table" duration=922.66µs
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.374756984Z level=info msg="Executing migration" id="add unique index permission.role_id"
policy-db-migrator | --------------
policy-pap | ssl.truststore.certificates = null
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.375777866Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.024812ms
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-pap | ssl.truststore.location = null
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.380653103Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
policy-db-migrator | --------------
policy-pap | ssl.truststore.password = null
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.381794133Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.140749ms
policy-db-migrator |
policy-pap | ssl.truststore.type = JKS
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.386225189Z level=info msg="Executing migration" id="create role table"
policy-db-migrator |
policy-pap | transaction.timeout.ms = 60000
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.387079752Z level=info msg="Migration successfully executed" id="create role table" duration=858.954µs
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-pap | transactional.id = null
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.389993685Z level=info msg="Executing migration" id="add column display_name"
policy-db-migrator | --------------
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.398720696Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.723191ms
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-pap |
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.402337527Z level=info msg="Executing migration" id="add column group_name"
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:14:57.552+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.408004864Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.663736ms
policy-db-migrator |
policy-pap | [2024-02-05T23:14:57.569+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.417031783Z level=info msg="Executing migration" id="add index role.org_id"
policy-db-migrator |
policy-pap | [2024-02-05T23:14:57.569+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.419082889Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=2.054367ms
policy-pap | [2024-02-05T23:14:57.569+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174897569
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
grafana
| logger=migrator t=2024-02-05T23:14:21.423230071Z level=info msg="Executing migration" id="add unique index role_org_id_name"
policy-pap | [2024-02-05T23:14:57.569+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8340479e-6066-4c0d-8cea-5ee1d125717a, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:21.424262126Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.031985ms
policy-pap | [2024-02-05T23:14:57.569+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=5cac03a0-b751-4443-b270-6b6ceb5efee2, alive=false, publisher=null]]: starting
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.427442138Z level=info msg="Executing migration" id="add index role_org_id_uid"
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-pap | [2024-02-05T23:14:57.570+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.428483454Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.041126ms
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-pap | acks = -1
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.506971266Z level=info msg="Executing migration" id="create team role table"
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
grafana | logger=migrator t=2024-02-05T23:14:21.508346839Z level=info msg="Migration successfully executed" id="create team role table" duration=1.375753ms
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-pap | batch.size = 16384
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.511987875Z level=info msg="Executing migration" id="add index team_role.org_id"
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
grafana | logger=migrator t=2024-02-05T23:14:21.514230004Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=2.241449ms
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-pap | buffer.memory = 33554432
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.517627195Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-pap | client.dns.lookup = use_all_dns_ips
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.518806012Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.182318ms
policy-pap | client.id = producer-2
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.524754074Z level=info msg="Executing migration" id="add index team_role.team_id"
policy-pap | compression.type = none
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
grafana | logger=migrator t=2024-02-05T23:14:21.525746259Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=995.826µs
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.530303324Z level=info msg="Executing migration" id="create user role table"
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
grafana | logger=migrator t=2024-02-05T23:14:21.532055411Z level=info msg="Migration successfully executed" id="create user role table" duration=1.754838ms
policy-pap | enable.idempotence = true
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.539599815Z level=info msg="Executing migration" id="add index user_role.org_id"
policy-pap | interceptor.classes = []
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.540986429Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.387364ms
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.544644451Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
policy-pap | linger.ms = 0
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
grafana | logger=migrator t=2024-02-05T23:14:21.546172567Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.527536ms
policy-pap | max.block.ms = 60000
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.550072693Z level=info msg="Executing migration" id="add index user_role.user_id"
policy-pap | max.in.flight.requests.per.connection = 5
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
grafana | logger=migrator t=2024-02-05T23:14:21.551330808Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.257205ms
policy-pap | max.request.size = 1048576
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.559250236Z level=info msg="Executing migration" id="create builtin role table"
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.560349396Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.10073ms
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:21.563731614Z level=info msg="Executing migration" id="add index builtin_role.role_id"
policy-pap | metadata.max.idle.ms = 300000
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
grafana | logger=migrator t=2024-02-05T23:14:21.564820521Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.088917ms
policy-pap | metric.reporters = []
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:21.573768443Z level=info msg="Executing migration" id="add index builtin_role.name"
policy-pap | metrics.num.samples = 2
kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName,
nodeTemplatessVersion) grafana | logger=migrator t=2024-02-05T23:14:21.575147377Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.383014ms policy-pap | metrics.recording.level = INFO kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.579880711Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:21.587550193Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.669222ms policy-pap | partitioner.adaptive.partitioning.enable = true kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:21.590716231Z level=info msg="Executing migration" id="add index builtin_role.org_id" policy-pap | partitioner.availability.timeout.ms = 0 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql grafana | logger=migrator t=2024-02-05T23:14:21.591734593Z level=info msg="Migration successfully executed" id="add index 
builtin_role.org_id" duration=1.01256ms policy-pap | partitioner.class = null kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.597105572Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" policy-pap | partitioner.ignore.keys = false kafka | [2024-02-05 23:14:58,341] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) grafana | logger=migrator t=2024-02-05T23:14:21.598172374Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.066693ms policy-pap | receive.buffer.bytes = 32768 kafka | [2024-02-05 23:14:58,341] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, 
__consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.601385774Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-02-05 23:14:58,342] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:21.602715385Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.329392ms policy-pap | reconnect.backoff.ms = 50 kafka | [2024-02-05 23:14:58,346] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:21.606148436Z level=info msg="Executing migration" id="add unique index role.uid" policy-pap | request.timeout.ms = 30000 kafka | [2024-02-05 23:14:58,347] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql grafana | logger=migrator t=2024-02-05T23:14:21.607282793Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.137608ms policy-pap | retries = 2147483647 kafka | [2024-02-05 23:14:58,347] 
INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.61237533Z level=info msg="Executing migration" id="create seed assignment table" policy-pap | retry.backoff.ms = 100 kafka | [2024-02-05 23:14:58,348] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) grafana | logger=migrator t=2024-02-05T23:14:21.613069757Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=694.257µs policy-pap | sasl.client.callback.handler.class = null kafka | [2024-02-05 23:14:58,348] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.61603651Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" policy-pap | sasl.jaas.config = null kafka | [2024-02-05 23:14:58,356] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:21.61713466Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.09786ms policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-02-05 23:14:58,356] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-05T23:14:21.620400792Z level=info msg="Executing migration" id="add column hidden to role table" policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | kafka | [2024-02-05 23:14:58,357] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.628403629Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.002467ms policy-pap | sasl.kerberos.service.name = null policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql kafka | [2024-02-05 23:14:58,357] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-02-05T23:14:21.634152555Z level=info msg="Executing migration" id="permission kind migration" 
policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,357] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-02-05T23:14:21.641073856Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.921621ms policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) kafka | [2024-02-05 23:14:58,369] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-02-05T23:14:21.643873351Z level=info msg="Executing migration" id="permission attribute migration" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,371] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-02-05T23:14:21.650575944Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.701802ms policy-db-migrator | kafka | [2024-02-05 23:14:58,371] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-02-05T23:14:21.653981077Z level=info 
msg="Executing migration" id="permission identifier migration" policy-db-migrator | kafka | [2024-02-05 23:14:58,371] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-02-05T23:14:21.662908624Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.926547ms policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql kafka | [2024-02-05 23:14:58,371] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-02-05T23:14:21.667780751Z level=info msg="Executing migration" id="add permission identifier index" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,382] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-02-05T23:14:21.668519398Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=738.468µs policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-02-05 23:14:58,383] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 
(kafka.log.LogManager) policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-02-05T23:14:21.671596126Z level=info msg="Executing migration" id="create query_history table v1" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,383] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-02-05T23:14:21.672600534Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.002327ms policy-db-migrator | kafka | [2024-02-05 23:14:58,383] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:21.678345758Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" policy-db-migrator | kafka | [2024-02-05 23:14:58,383] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-05T23:14:21.680267865Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.926168ms policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql kafka | [2024-02-05 23:14:58,392] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-02-05T23:14:21.683451598Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,392] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-02-05T23:14:21.683516673Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=65.855µs policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-02-05 23:14:58,392] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-02-05T23:14:21.694648541Z level=info msg="Executing migration" id="rbac disabled migrator" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,393] INFO [Partition __consumer_offsets-33 broker=1] Log 
loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-02-05T23:14:21.694740261Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=90.05µs policy-db-migrator | kafka | [2024-02-05 23:14:58,393] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-02-05T23:14:21.700395565Z level=info msg="Executing migration" id="teams permissions migration" policy-db-migrator | kafka | [2024-02-05 23:14:58,400] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:21.700782563Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=386.818µs policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql kafka | [2024-02-05 23:14:58,400] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-05T23:14:21.70305783Z level=info msg="Executing migration" id="dashboard permissions" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,400] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 
(kafka.cluster.Partition) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-02-05T23:14:21.703531207Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=473.888µs policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-02-05 23:14:58,400] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-02-05T23:14:21.706150262Z level=info msg="Executing migration" id="dashboard permissions uid scopes" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,400] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-02-05T23:14:21.70666345Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=513.057µs policy-db-migrator | kafka | [2024-02-05 23:14:58,409] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-02-05T23:14:21.709716003Z level=info msg="Executing migration" id="drop managed folder create actions" policy-db-migrator | kafka | [2024-02-05 23:14:58,410] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-02-05T23:14:21.710010539Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=288.456µs policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql kafka | [2024-02-05 23:14:58,410] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) policy-pap | security.providers = null grafana | logger=migrator t=2024-02-05T23:14:21.715266003Z level=info msg="Executing migration" id="alerting notification permissions" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,410] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-02-05T23:14:21.715893575Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=622.881µs 
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-02-05 23:14:58,410] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-02-05T23:14:21.719156946Z level=info msg="Executing migration" id="create query_history_star table v1" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,417] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-02-05T23:14:21.720540899Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.383314ms policy-db-migrator | kafka | [2024-02-05 23:14:58,417] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-02-05T23:14:21.724070872Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" policy-db-migrator | kafka | [2024-02-05 23:14:58,417] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-02-05T23:14:21.72520738Z 
level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.136218ms policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql kafka | [2024-02-05 23:14:58,417] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-02-05T23:14:21.72908284Z level=info msg="Executing migration" id="add column org_id in query_history_star" policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,417] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-02-05T23:14:21.73744922Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.365959ms policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-02-05T23:14:21.742600118Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-02-05 23:14:58,423] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-02-05T23:14:21.742713144Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to 
bigint" duration=113.286µs policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,424] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-02-05T23:14:21.745647971Z level=info msg="Executing migration" id="create correlation table v1" policy-db-migrator | policy-pap | ssl.keystore.key = null kafka | [2024-02-05 23:14:58,424] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.746637116Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=990.755µs policy-db-migrator | policy-pap | ssl.keystore.location = null kafka | [2024-02-05 23:14:58,425] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.749812917Z level=info msg="Executing migration" id="add index correlations.uid" policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | ssl.keystore.password = null kafka | [2024-02-05 23:14:58,425] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:21.751051658Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.238521ms policy-db-migrator | -------------- policy-pap | ssl.keystore.type = JKS kafka | [2024-02-05 23:14:58,435] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-05T23:14:21.758721479Z level=info msg="Executing migration" id="add index correlations.source_uid" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-02-05 23:14:58,436] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-05T23:14:21.760339297Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.617257ms policy-db-migrator | -------------- policy-pap | ssl.provider = null kafka | [2024-02-05 23:14:58,436] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.763252038Z level=info msg="Executing migration" id="add correlation config column" policy-db-migrator | policy-pap | ssl.secure.random.implementation = null kafka | [2024-02-05 23:14:58,436] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.774872577Z level=info 
msg="Migration successfully executed" id="add correlation config column" duration=11.621448ms policy-db-migrator | policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-02-05 23:14:58,436] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:21.778167495Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-pap | ssl.truststore.certificates = null kafka | [2024-02-05 23:14:58,442] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-05T23:14:21.779738112Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.568947ms policy-db-migrator | -------------- policy-pap | ssl.truststore.location = null kafka | [2024-02-05 23:14:58,443] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-05T23:14:21.784065524Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.truststore.password = null kafka | [2024-02-05 23:14:58,443] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for 
partition __consumer_offsets-26 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.785262647Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.196842ms policy-db-migrator | -------------- policy-pap | ssl.truststore.type = JKS kafka | [2024-02-05 23:14:58,443] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.789306054Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" policy-db-migrator | policy-pap | transaction.timeout.ms = 60000 kafka | [2024-02-05 23:14:58,443] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:21.820896527Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=31.589082ms policy-db-migrator | policy-pap | transactional.id = null kafka | [2024-02-05 23:14:58,451] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-05T23:14:21.823735412Z level=info msg="Executing migration" id="create correlation v2" policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-02-05 23:14:58,451] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-05T23:14:21.824470899Z level=info msg="Migration successfully executed" id="create correlation v2" duration=732.997µs policy-db-migrator | -------------- policy-pap | kafka | [2024-02-05 23:14:58,451] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.829552913Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | [2024-02-05T23:14:57.570+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
kafka | [2024-02-05 23:14:58,451] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.830760297Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.207045ms policy-db-migrator | -------------- policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 kafka | [2024-02-05 23:14:58,452] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:21.835718003Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | [2024-02-05 23:14:58,462] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:21.837706284Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.990481ms policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174897573 kafka | [2024-02-05 23:14:58,463] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql grafana | logger=migrator t=2024-02-05T23:14:21.843687472Z level=info msg="Executing migration" 
id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2024-02-05T23:14:21.844867931Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.180309ms policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=5cac03a0-b751-4443-b270-6b6ceb5efee2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-02-05 23:14:58,463] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator kafka | [2024-02-05 23:14:58,463] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-02-05T23:14:21.848128931Z level=info msg="Executing migration" id="copy correlation v1 to v2" policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher kafka | [2024-02-05 23:14:58,463] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.848577103Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=447.782µs policy-pap | [2024-02-05T23:14:57.576+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher kafka | [2024-02-05 23:14:58,470] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-02-05T23:14:57.577+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers kafka | [2024-02-05 23:14:58,470] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-05T23:14:21.852017243Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" policy-db-migrator | policy-pap | [2024-02-05T23:14:57.585+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers kafka | [2024-02-05 23:14:58,470] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.853282871Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.265118ms policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-pap | [2024-02-05T23:14:57.585+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock kafka | [2024-02-05 23:14:58,470] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.858809216Z level=info msg="Executing migration" id="add 
provisioning column" policy-pap | [2024-02-05T23:14:57.585+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests kafka | [2024-02-05 23:14:58,470] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:21.868379199Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.569372ms policy-db-migrator | -------------- policy-pap | [2024-02-05T23:14:57.586+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer kafka | [2024-02-05 23:14:58,477] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-05T23:14:21.871506779Z level=info msg="Executing migration" id="create entity_events table" policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | [2024-02-05T23:14:57.586+00:00|INFO|TimerManager|Thread-9] timer manager update started kafka | [2024-02-05 23:14:58,478] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-05T23:14:21.872311452Z level=info msg="Migration successfully executed" id="create entity_events table" duration=804.454µs policy-db-migrator | -------------- policy-pap | [2024-02-05T23:14:57.586+00:00|INFO|TimerManager|Thread-10] timer manager state-change started kafka | [2024-02-05 23:14:58,478] INFO 
[Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.875667744Z level=info msg="Executing migration" id="create dashboard public config v1" policy-db-migrator | policy-pap | [2024-02-05T23:14:57.587+00:00|INFO|ServiceManager|main] Policy PAP started kafka | [2024-02-05 23:14:58,478] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:21.876856254Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.187989ms policy-db-migrator | policy-pap | [2024-02-05T23:14:57.587+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.666 seconds (process running for 11.332) kafka | [2024-02-05 23:14:58,478] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-05T23:14:21.883585032Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" policy-pap | [2024-02-05T23:14:58.030+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-02-05T23:14:21.884155971Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" kafka | [2024-02-05 23:14:58,485] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:14:58.031+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql grafana | logger=migrator t=2024-02-05T23:14:21.888602191Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" kafka | [2024-02-05 23:14:58,485] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-05T23:14:58.031+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.889157648Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index 
IDX_dashboard_public_config_org_id_dashboard_uid - v1" kafka | [2024-02-05 23:14:58,485] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:14:58.031+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-02-05T23:14:21.892676776Z level=info msg="Executing migration" id="Drop old dashboard public config table" kafka | [2024-02-05 23:14:58,485] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:14:58.068+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.893504164Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=827.157µs kafka | [2024-02-05 23:14:58,486] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-05T23:14:58.068+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:21.898832064Z level=info msg="Executing migration" id="recreate dashboard public config v1" kafka | [2024-02-05 23:14:58,493] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:14:58.074+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:21.899797122Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=964.859µs kafka | [2024-02-05 23:14:58,494] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-05T23:14:58.075+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ policy-db-migrator | > upgrade 0100-pdp.sql grafana | logger=migrator t=2024-02-05T23:14:21.903407513Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" kafka | [2024-02-05 23:14:58,494] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:14:58.142+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, 
groupId=82113737-2238-440a-b31e-67419d0ce49a] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.904923377Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.513583ms kafka | [2024-02-05 23:14:58,494] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:14:58.221+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY grafana | logger=migrator t=2024-02-05T23:14:21.910327484Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" kafka | [2024-02-05 23:14:58,494] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-05T23:14:58.252+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:21.911612496Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.284683ms kafka | [2024-02-05 23:14:58,499] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-02-05T23:14:58.867+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) grafana | logger=migrator t=2024-02-05T23:14:21.949263605Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" kafka | [2024-02-05 23:14:58,500] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-02-05T23:14:58.875+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] (Re-)joining group grafana | logger=migrator t=2024-02-05T23:14:21.951758262Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=2.494287ms kafka | [2024-02-05 23:14:58,500] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is 
found for partition __consumer_offsets-1 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-pap | [2024-02-05T23:14:58.899+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Request joining group due to: need to re-join with the given member-id: consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a grafana | logger=migrator t=2024-02-05T23:14:21.955808971Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-02-05 23:14:58,500] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-02-05T23:14:58.899+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) grafana | logger=migrator t=2024-02-05T23:14:21.95787645Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.060608ms kafka | [2024-02-05 23:14:58,500] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-pap | [2024-02-05T23:14:58.899+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] (Re-)joining group grafana | logger=migrator t=2024-02-05T23:14:21.963963352Z level=info msg="Executing migration" id="Drop public config table" kafka | [2024-02-05 23:14:58,511] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | [2024-02-05T23:14:58.952+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) grafana | logger=migrator t=2024-02-05T23:14:21.965060822Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.095309ms kafka | [2024-02-05 23:14:58,512] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-02-05T23:14:58.954+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group grafana | logger=migrator t=2024-02-05T23:14:21.968376215Z level=info msg="Executing migration" id="Recreate dashboard public config v2" kafka | [2024-02-05 23:14:58,512] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-02-05T23:14:58.957+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7 grafana | logger=migrator t=2024-02-05T23:14:21.969440616Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.063991ms kafka | [2024-02-05 23:14:58,512] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-pap | [2024-02-05T23:14:58.958+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) grafana | logger=migrator t=2024-02-05T23:14:21.9740798Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" kafka | [2024-02-05 23:14:58,513] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:14:58.958+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
grafana | logger=migrator t=2024-02-05T23:14:21.977155828Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=3.074177ms
kafka | [2024-02-05 23:14:58,519] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-pap | [2024-02-05T23:15:01.925+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a', protocol='range'}
grafana | logger=migrator t=2024-02-05T23:14:21.980832153Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
kafka | [2024-02-05 23:14:58,520] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:15:01.935+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Finished assignment for group at generation 1: {consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a=Assignment(partitions=[policy-pdp-pap-0])}
grafana | logger=migrator t=2024-02-05T23:14:21.982390577Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.552633ms
kafka | [2024-02-05 23:14:58,520] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-02-05T23:15:01.957+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a', protocol='range'}
grafana | logger=migrator t=2024-02-05T23:14:21.986477995Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
kafka | [2024-02-05 23:14:58,520] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-02-05T23:15:01.958+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
grafana | logger=migrator t=2024-02-05T23:14:21.987622595Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.14438ms
kafka | [2024-02-05 23:14:58,520] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-pap | [2024-02-05T23:15:01.961+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7', protocol='range'}
grafana | logger=migrator t=2024-02-05T23:14:21.991951148Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
kafka | [2024-02-05 23:14:58,535] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:15:01.962+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7=Assignment(partitions=[policy-pdp-pap-0])}
grafana | logger=migrator t=2024-02-05T23:14:22.024102558Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=32.151571ms
kafka | [2024-02-05 23:14:58,536] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
policy-pap | [2024-02-05T23:15:01.965+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Adding newly assigned partitions: policy-pdp-pap-0
grafana | logger=migrator t=2024-02-05T23:14:22.027552024Z level=info msg="Executing migration" id="add annotations_enabled column"
kafka | [2024-02-05 23:14:58,536] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:15:01.967+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7', protocol='range'}
grafana | logger=migrator t=2024-02-05T23:14:22.034514051Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.97296ms
kafka | [2024-02-05 23:14:58,536] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-02-05T23:15:01.968+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
grafana | logger=migrator t=2024-02-05T23:14:22.037570269Z level=info msg="Executing migration" id="add time_selection_enabled column"
kafka | [2024-02-05 23:14:58,536] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-05T23:15:01.968+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
grafana | logger=migrator t=2024-02-05T23:14:22.04370671Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.13563ms
kafka | [2024-02-05 23:14:58,543] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-pap | [2024-02-05T23:15:01.987+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Found no committed offset for partition policy-pdp-pap-0
grafana | logger=migrator t=2024-02-05T23:14:22.047936161Z level=info msg="Executing migration" id="delete orphaned public dashboards"
kafka | [2024-02-05 23:14:58,544] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-02-05T23:15:01.987+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
grafana | logger=migrator t=2024-02-05T23:14:22.048152539Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=216.159µs
kafka | [2024-02-05 23:14:58,544] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:15:02.006+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
grafana | logger=migrator t=2024-02-05T23:14:22.050745743Z level=info msg="Executing migration" id="add share column"
kafka | [2024-02-05 23:14:58,544] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
policy-pap | [2024-02-05T23:15:02.006+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
grafana | logger=migrator t=2024-02-05T23:14:22.056709845Z level=info msg="Migration successfully executed" id="add share column" duration=5.959971ms
kafka | [2024-02-05 23:14:58,544] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:15:05.122+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
kafka | [2024-02-05 23:14:58,551] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-05T23:14:22.060068501Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
policy-db-migrator |
policy-pap | [2024-02-05T23:15:05.122+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
kafka | [2024-02-05 23:14:58,551] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-05T23:14:22.06028624Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=217.409µs
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:15:05.129+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 7 ms
kafka | [2024-02-05 23:14:58,551] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-05T23:14:22.062992579Z level=info msg="Executing migration" id="create file table"
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
policy-pap | [2024-02-05T23:15:18.855+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
grafana | logger=migrator t=2024-02-05T23:14:22.063716342Z level=info msg="Migration successfully executed" id="create file table" duration=723.653µs
kafka | [2024-02-05 23:14:58,551] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | []
grafana | logger=migrator t=2024-02-05T23:14:22.068875803Z level=info msg="Executing migration" id="file table idx: path natural pk"
kafka | [2024-02-05 23:14:58,551] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-05T23:15:18.855+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-02-05T23:14:22.071831018Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.960236ms
kafka | [2024-02-05 23:14:58,559] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce579082-502e-4e58-9380-d7baab3a6748","timestampMs":1707174918820,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-02-05T23:14:22.076170694Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
kafka | [2024-02-05 23:14:58,559] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-pap | [2024-02-05T23:15:18.856+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-02-05T23:14:22.077405812Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.234858ms
kafka | [2024-02-05 23:14:58,559] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:22.080759237Z level=info msg="Executing migration" id="create file_meta table"
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce579082-502e-4e58-9380-d7baab3a6748","timestampMs":1707174918820,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
kafka | [2024-02-05 23:14:58,559] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
grafana | logger=migrator t=2024-02-05T23:14:22.081595945Z level=info msg="Migration successfully executed" id="create file_meta table" duration=836.118µs
policy-pap | [2024-02-05T23:15:18.863+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
kafka | [2024-02-05 23:14:58,559] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:22.08495008Z level=info msg="Executing migration" id="file table idx: path key"
policy-pap | [2024-02-05T23:15:18.971+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting
kafka | [2024-02-05 23:14:58,566] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:22.086227097Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.270295ms
policy-pap | [2024-02-05T23:15:18.971+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting listener
kafka | [2024-02-05 23:14:58,566] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:22.09174928Z level=info msg="Executing migration" id="set path collation in file table"
policy-pap | [2024-02-05T23:15:18.971+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting timer
kafka | [2024-02-05 23:14:58,567] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-05T23:14:22.091891692Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=142.412µs
policy-pap | [2024-02-05T23:15:18.972+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=687caf5b-4d92-42de-acdb-f82aab7cc43c, expireMs=1707174948972]
kafka | [2024-02-05 23:14:58,567] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:22.095807453Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
policy-pap | [2024-02-05T23:15:18.974+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting enqueue
kafka | [2024-02-05 23:14:58,567] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:22.095873947Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=66.945µs
policy-pap | [2024-02-05T23:15:18.974+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=687caf5b-4d92-42de-acdb-f82aab7cc43c, expireMs=1707174948972]
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
kafka | [2024-02-05 23:14:58,577] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-05T23:14:22.099461826Z level=info msg="Executing migration" id="managed permissions migration"
policy-pap | [2024-02-05T23:15:18.974+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate started
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,577] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-05T23:14:22.100361038Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=898.492µs
policy-pap | [2024-02-05T23:15:18.976+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator |
kafka | [2024-02-05 23:14:58,577] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-05T23:14:22.105117178Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"687caf5b-4d92-42de-acdb-f82aab7cc43c","timestampMs":1707174918952,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
kafka | [2024-02-05 23:14:58,577] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-05T23:14:22.105519628Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=400.02µs
policy-pap | [2024-02-05T23:15:19.013+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
kafka | [2024-02-05 23:14:58,577] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:22.109309751Z level=info msg="Executing migration" id="RBAC action name migrator"
policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"687caf5b-4d92-42de-acdb-f82aab7cc43c","timestampMs":1707174918952,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,584] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-05T23:14:22.110142069Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=835.888µs
policy-pap | [2024-02-05T23:15:19.013+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
kafka | [2024-02-05 23:14:58,584] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-05T23:14:22.113400122Z level=info msg="Executing migration" id="Add UID column to playlist"
policy-pap | [2024-02-05T23:15:19.022+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | JOIN pdpstatistics b
kafka | [2024-02-05 23:14:58,584] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-05T23:14:22.123834538Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.430086ms
policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"687caf5b-4d92-42de-acdb-f82aab7cc43c","timestampMs":1707174918952,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
kafka | [2024-02-05 23:14:58,584] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-05T23:14:22.127380147Z level=info msg="Executing migration" id="Update uid column values in playlist"
policy-pap | [2024-02-05T23:15:19.022+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-db-migrator | SET a.id = b.id
kafka | [2024-02-05 23:14:58,585] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:22.127608318Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=228.671µs
policy-pap | [2024-02-05T23:15:19.037+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,594] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-05T23:14:22.130832603Z level=info msg="Executing migration" id="Add index for uid in playlist"
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"73abfafa-5e87-480f-971c-84c352b572be","timestampMs":1707174919022,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
policy-db-migrator |
kafka | [2024-02-05 23:14:58,594] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-05T23:14:22.132368528Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.535225ms
policy-pap | [2024-02-05T23:15:19.037+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-db-migrator |
kafka | [2024-02-05 23:14:58,594] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-05T23:14:22.136739761Z level=info msg="Executing migration" id="update group index for alert rules"
policy-pap | [2024-02-05T23:15:19.037+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
kafka | [2024-02-05 23:14:58,594] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-05T23:14:22.137345327Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=605.706µs
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"73abfafa-5e87-480f-971c-84c352b572be","timestampMs":1707174919022,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,594] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:22.140888974Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
policy-pap | [2024-02-05T23:15:19.040+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
kafka | [2024-02-05 23:14:58,600] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-05T23:14:22.141232402Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=343.068µs
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"687caf5b-4d92-42de-acdb-f82aab7cc43c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"97dae9b3-163a-482b-805c-f915fcf0db7a","timestampMs":1707174919025,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,601] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-05T23:14:22.145392137Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
policy-pap | [2024-02-05T23:15:19.061+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping
policy-db-migrator |
kafka | [2024-02-05 23:14:58,601] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-05T23:14:22.14598287Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=590.673µs
policy-pap | [2024-02-05T23:15:19.063+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping enqueue
policy-db-migrator |
kafka | [2024-02-05 23:14:58,601] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-05T23:14:22.149444599Z level=info msg="Executing migration" id="add action column to seed_assignment"
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
kafka | [2024-02-05 23:14:58,601] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-05T23:14:22.158039682Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.592003ms
policy-pap | [2024-02-05T23:15:19.063+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping timer
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,612] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-05T23:14:22.163358058Z level=info msg="Executing migration" id="add scope column to seed_assignment"
policy-pap | [2024-02-05T23:15:19.063+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=687caf5b-4d92-42de-acdb-f82aab7cc43c, expireMs=1707174948972]
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
kafka | [2024-02-05 23:14:58,612] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-05T23:14:22.172014755Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.656226ms
policy-pap | [2024-02-05T23:15:19.063+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping listener
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,612] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-05T23:14:22.175336192Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
policy-pap | [2024-02-05T23:15:19.064+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopped
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:22.176479438Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.143227ms
kafka | [2024-02-05 23:14:58,612] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-02-05T23:15:19.065+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator |
grafana | logger=migrator t=2024-02-05T23:14:22.181125524Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
kafka | [2024-02-05 23:14:58,612] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"687caf5b-4d92-42de-acdb-f82aab7cc43c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"97dae9b3-163a-482b-805c-f915fcf0db7a","timestampMs":1707174919025,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
grafana | logger=migrator t=2024-02-05T23:14:22.29747907Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=116.351766ms
kafka | [2024-02-05 23:14:58,658] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-02-05T23:15:19.065+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 687caf5b-4d92-42de-acdb-f82aab7cc43c
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-05T23:14:22.300983398Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate successful
grafana | logger=migrator t=2024-02-05T23:14:22.301817736Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=834.447µs
kafka | [2024-02-05 23:14:58,658] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 start publishing next request
grafana | logger=migrator t=2024-02-05T23:14:22.305243646Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
kafka | [2024-02-05 23:14:58,658] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange starting
grafana | logger=migrator t=2024-02-05T23:14:22.307252708Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.008492ms
kafka | [2024-02-05 23:14:58,659] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange starting listener
grafana | logger=migrator t=2024-02-05T23:14:22.310936466Z level=info msg="Executing migration" id="add primary key to seed_assigment"
kafka | [2024-02-05 23:14:58,659] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange starting timer
grafana | logger=migrator t=2024-02-05T23:14:22.344530921Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=33.594875ms
kafka | [2024-02-05 23:14:58,666] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | > upgrade 0210-sequence.sql
policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=54fa79c2-9992-4631-b988-2a9cecf2df7f, expireMs=1707174949070]
grafana | logger=migrator t=2024-02-05T23:14:22.348035819Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
kafka | [2024-02-05 23:14:58,667] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange starting enqueue
grafana | logger=migrator t=2024-02-05T23:14:22.34825605Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=222.671µs
kafka | [2024-02-05 23:14:58,667] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-pap |
[2024-02-05T23:15:19.070+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange started grafana | logger=migrator t=2024-02-05T23:14:22.351205333Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" kafka | [2024-02-05 23:14:58,667] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=54fa79c2-9992-4631-b988-2a9cecf2df7f, expireMs=1707174949070] kafka | [2024-02-05 23:14:58,667] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.351406978Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=201.706µs policy-pap | [2024-02-05T23:15:19.071+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-02-05 23:14:58,674] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.35439459Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" kafka | [2024-02-05 23:14:58,674] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | 
{"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"54fa79c2-9992-4631-b988-2a9cecf2df7f","timestampMs":1707174918953,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | > upgrade 0220-sequence.sql grafana | logger=migrator t=2024-02-05T23:14:22.354611089Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=216.268µs kafka | [2024-02-05 23:14:58,674] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.082+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.357672427Z level=info msg="Executing migration" id="create folder table" kafka | [2024-02-05 23:14:58,674] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"54fa79c2-9992-4631-b988-2a9cecf2df7f","timestampMs":1707174918953,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) grafana | logger=migrator t=2024-02-05T23:14:22.359160651Z level=info msg="Migration successfully executed" id="create folder table" duration=1.487884ms kafka | [2024-02-05 23:14:58,674] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader epoch was -1. (state.change.logger) policy-pap | [2024-02-05T23:15:19.082+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.404136495Z level=info msg="Executing migration" id="Add index for parent_uid" kafka | [2024-02-05 23:14:58,680] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:15:19.089+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.406149388Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.013133ms kafka | [2024-02-05 23:14:58,680] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"54fa79c2-9992-4631-b988-2a9cecf2df7f","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"d0f9af2d-ed17-4886-8d33-16b52a441775","timestampMs":1707174919081,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.411017704Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" kafka | [2024-02-05 23:14:58,680] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.090+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 54fa79c2-9992-4631-b988-2a9cecf2df7f policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql grafana | logger=migrator t=2024-02-05T23:14:22.412225755Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.214302ms kafka | [2024-02-05 23:14:58,680] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.102+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.41647137Z level=info msg="Executing migration" id="Update folder title length" kafka | [2024-02-05 23:14:58,680] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"54fa79c2-9992-4631-b988-2a9cecf2df7f","timestampMs":1707174918953,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) grafana | logger=migrator t=2024-02-05T23:14:22.416497116Z level=info msg="Migration successfully executed" id="Update folder title length" duration=26.506µs kafka | [2024-02-05 23:14:58,689] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:15:19.102+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.420988595Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" kafka | [2024-02-05 23:14:58,689] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-05T23:15:19.106+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.422283857Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.294202ms kafka | [2024-02-05 23:14:58,689] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 
(kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"54fa79c2-9992-4631-b988-2a9cecf2df7f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d0f9af2d-ed17-4886-8d33-16b52a441775","timestampMs":1707174919081,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.426594366Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" kafka | [2024-02-05 23:14:58,689] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange stopping policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql grafana | logger=migrator t=2024-02-05T23:14:22.427969236Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.37769ms kafka | [2024-02-05 23:14:58,689] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange stopping enqueue policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.431954802Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" kafka | [2024-02-05 23:14:58,696] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange stopping timer policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) grafana | logger=migrator t=2024-02-05T23:14:22.434608139Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.652747ms kafka | [2024-02-05 23:14:58,696] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=54fa79c2-9992-4631-b988-2a9cecf2df7f, expireMs=1707174949070] policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.438475229Z level=info msg="Executing migration" id="Sync dashboard and folder table" kafka | [2024-02-05 23:14:58,696] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 
policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange stopping listener policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.439172355Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=698.747µs kafka | [2024-02-05 23:14:58,696] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange stopped policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.442416205Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" kafka | [2024-02-05 23:14:58,697] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange successful policy-db-migrator | > upgrade 0120-toscatrigger.sql grafana | logger=migrator t=2024-02-05T23:14:22.442718083Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=300.338µs kafka | [2024-02-05 23:14:58,702] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 start publishing next request policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.448386587Z level=info msg="Executing migration" id="create anon_device table" kafka | [2024-02-05 23:14:58,702] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting policy-db-migrator | DROP TABLE IF EXISTS toscatrigger grafana | logger=migrator t=2024-02-05T23:14:22.449220476Z level=info msg="Migration successfully executed" id="create anon_device table" duration=834.388µs kafka | [2024-02-05 23:14:58,703] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting listener policy-db-migrator | -------------- policy-db-migrator | kafka | [2024-02-05 23:14:58,703] INFO [Partition 
__consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-05T23:14:22.453191558Z level=info msg="Executing migration" id="add unique index anon_device.device_id" policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.454035557Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=843.869µs kafka | [2024-02-05 23:14:58,703] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting timer policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql grafana | logger=migrator t=2024-02-05T23:14:22.458205006Z level=info msg="Executing migration" id="add index anon_device.updated_at" kafka | [2024-02-05 23:14:58,709] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=39c1570c-dd05-4983-bede-a1e58213f1cf, expireMs=1707174949107] policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.459209411Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.004466ms kafka | [2024-02-05 23:14:58,709] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | 
[2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting enqueue policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB grafana | logger=migrator t=2024-02-05T23:14:22.463861878Z level=info msg="Executing migration" id="create signing_key table" kafka | [2024-02-05 23:14:58,709] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate started policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.464558434Z level=info msg="Migration successfully executed" id="create signing_key table" duration=696.397µs kafka | [2024-02-05 23:14:58,709] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-02-05 23:14:58,709] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39c1570c-dd05-4983-bede-a1e58213f1cf","timestampMs":1707174919093,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.467382479Z level=info msg="Executing migration" id="add unique index signing_key.key_id" kafka | [2024-02-05 23:14:58,717] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:15:19.123+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.468310428Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=927.908µs kafka | [2024-02-05 23:14:58,718] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39c1570c-dd05-4983-bede-a1e58213f1cf","timestampMs":1707174919093,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | > upgrade 0140-toscaparameter.sql grafana | logger=migrator t=2024-02-05T23:14:22.471205879Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" kafka | [2024-02-05 23:14:58,718] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition 
__consumer_offsets-35 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.123+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.472018272Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=808.993µs kafka | [2024-02-05 23:14:58,718] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.127+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | DROP TABLE IF EXISTS toscaparameter grafana | logger=migrator t=2024-02-05T23:14:22.477869958Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" kafka | [2024-02-05 23:14:58,718] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39c1570c-dd05-4983-bede-a1e58213f1cf","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"596c9de2-f9f1-49de-b6fb-fa0fd5c472b5","timestampMs":1707174919116,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.478090058Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=220.779µs kafka | [2024-02-05 23:14:58,726] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:15:19.127+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.480987019Z level=info msg="Executing migration" id="Add folder_uid for dashboard" kafka | [2024-02-05 23:14:58,726] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-05T23:15:19.128+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping enqueue policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.487689586Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.702597ms kafka | [2024-02-05 23:14:58,726] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 
(kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.128+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping timer policy-db-migrator | > upgrade 0150-toscaproperty.sql grafana | logger=migrator t=2024-02-05T23:14:22.490752545Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" kafka | [2024-02-05 23:14:58,727] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.128+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=39c1570c-dd05-4983-bede-a1e58213f1cf, expireMs=1707174949107] policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.491373485Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=615.55µs kafka | [2024-02-05 23:14:58,727] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-05T23:15:19.128+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping listener policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints grafana | logger=migrator t=2024-02-05T23:14:22.496266975Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" kafka | [2024-02-05 23:14:58,739] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:15:19.128+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopped policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.497484559Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.217424ms kafka | [2024-02-05 23:14:58,741] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-05T23:15:19.129+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | grafana | logger=migrator t=2024-02-05T23:14:22.500499397Z level=info msg="Executing migration" id="create sso_setting table" kafka | [2024-02-05 23:14:58,741] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) policy-pap | 
{"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39c1570c-dd05-4983-bede-a1e58213f1cf","timestampMs":1707174919093,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-05T23:14:22.501453912Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=954.264µs kafka | [2024-02-05 23:14:58,741] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-pap | [2024-02-05T23:15:19.129+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-02-05T23:14:22.505865344Z level=info msg="Executing migration" id="copy kvstore migration status to each org" kafka | [2024-02-05 23:14:58,741] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-05T23:15:19.131+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-02-05T23:14:22.506847185Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=982.82µs policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,748] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39c1570c-dd05-4983-bede-a1e58213f1cf","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"596c9de2-f9f1-49de-b6fb-fa0fd5c472b5","timestampMs":1707174919116,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-02-05T23:14:22.511264749Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" policy-db-migrator | kafka | [2024-02-05 23:14:58,748] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-05T23:15:19.131+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 39c1570c-dd05-4983-bede-a1e58213f1cf grafana | logger=migrator t=2024-02-05T23:14:22.511523797Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=259.388µs policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,748] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 
policy-pap | [2024-02-05T23:15:19.133+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate successful grafana | logger=migrator t=2024-02-05T23:14:22.514682047Z level=info msg="migrations completed" performed=526 skipped=0 duration=4.107196231s policy-db-migrator | DROP TABLE IF EXISTS toscaproperty kafka | [2024-02-05 23:14:58,748] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:19.133+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 has no more requests grafana | logger=sqlstore t=2024-02-05T23:14:22.533242131Z level=info msg="Created default admin" user=admin policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,748] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-05T23:15:25.712+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls grafana | logger=sqlstore t=2024-02-05T23:14:22.533622136Z level=info msg="Created default organization" policy-db-migrator | kafka | [2024-02-05 23:14:58,754] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:15:25.719+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls grafana | logger=secrets t=2024-02-05T23:14:22.545078043Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 policy-db-migrator | kafka | [2024-02-05 23:14:58,755] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-05T23:15:26.129+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup grafana | logger=plugin.store t=2024-02-05T23:14:22.561377648Z level=info msg="Loading plugins..." 
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql kafka | [2024-02-05 23:14:58,755] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:26.665+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup policy-db-migrator | -------------- grafana | logger=local.finder t=2024-02-05T23:14:22.597076456Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled kafka | [2024-02-05 23:14:58,755] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:26.665+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY grafana | logger=plugin.store t=2024-02-05T23:14:22.597132629Z level=info msg="Plugins loaded" count=55 duration=35.756221ms kafka | [2024-02-05 23:14:58,755] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- grafana | logger=query_data t=2024-02-05T23:14:22.599222819Z level=info msg="Query Service initialization" kafka | [2024-02-05 23:14:58,766] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-05T23:15:27.185+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group testGroup policy-db-migrator | grafana | logger=live.push_http t=2024-02-05T23:14:22.602982634Z level=info msg="Live Push Gateway initialization" kafka | [2024-02-05 23:14:58,767] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-05T23:15:27.405+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering a deploy for policy onap.restart.tca 1.0.0 policy-db-migrator | -------------- grafana | logger=ngalert.migration t=2024-02-05T23:14:22.609985149Z level=info msg=Starting kafka | [2024-02-05 23:14:58,767] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:27.498+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) grafana | logger=ngalert.migration orgID=1 t=2024-02-05T23:14:22.611741325Z level=info msg="Migrating alerts for organisation" kafka | [2024-02-05 23:14:58,767] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-05T23:15:27.498+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group testGroup 
policy-db-migrator | -------------- grafana | logger=ngalert.migration orgID=1 t=2024-02-05T23:14:22.613628298Z level=info msg="Alerts found to migrate" alerts=0 policy-pap | [2024-02-05T23:15:27.499+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group testGroup kafka | [2024-02-05 23:14:58,767] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-05T23:14:22.617510742Z level=info msg="Completed legacy migration" policy-pap | [2024-02-05T23:15:27.512+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-05T23:15:27Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-05T23:15:27Z, user=policyadmin)] kafka | [2024-02-05 23:14:58,774] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=infra.usagestats.collector t=2024-02-05T23:14:22.654013621Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 policy-pap | [2024-02-05T23:15:28.202+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup kafka | [2024-02-05 23:14:58,775] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 
(kafka.log.LogManager) policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql grafana | logger=provisioning.datasources t=2024-02-05T23:14:22.65653999Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz policy-pap | [2024-02-05T23:15:28.203+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 kafka | [2024-02-05 23:14:58,775] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=provisioning.alerting t=2024-02-05T23:14:22.669967549Z level=info msg="starting to provision alerting" policy-pap | [2024-02-05T23:15:28.203+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 kafka | [2024-02-05 23:14:58,775] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY grafana | logger=provisioning.alerting t=2024-02-05T23:14:22.669986673Z level=info msg="finished to provision alerting" policy-pap | [2024-02-05T23:15:28.203+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup kafka | [2024-02-05 23:14:58,775] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- grafana | logger=grafanaStorageLogger t=2024-02-05T23:14:22.670319338Z level=info msg="Storage starting" policy-pap | [2024-02-05T23:15:28.203+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup kafka | [2024-02-05 23:14:58,781] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=ngalert.state.manager t=2024-02-05T23:14:22.670456168Z level=info msg="Warming state cache for startup" policy-pap | [2024-02-05T23:15:28.215+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-05T23:15:28Z, user=policyadmin)] kafka | [2024-02-05 23:14:58,782] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- grafana | logger=ngalert.multiorg.alertmanager t=2024-02-05T23:14:22.670920184Z level=info msg="Starting MultiOrg Alertmanager" policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup kafka | [2024-02-05 23:14:58,782] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) grafana | logger=ngalert.state.manager t=2024-02-05T23:14:22.670988079Z level=info msg="State cache has been initialized" states=0 duration=531.12µs policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup kafka | [2024-02-05 23:14:58,782] 
INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=ngalert.scheduler t=2024-02-05T23:14:22.671024037Z level=info msg="Starting scheduler" tickInterval=10s policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 kafka | [2024-02-05 23:14:58,782] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- grafana | logger=ticker t=2024-02-05T23:14:22.671101174Z level=info msg=starting first_tick=2024-02-05T23:14:30Z policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 kafka | [2024-02-05 23:14:58,796] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=http.server t=2024-02-05T23:14:22.682082384Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup kafka | [2024-02-05 23:14:58,796] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | grafana | logger=grafana-apiserver t=2024-02-05T23:14:22.686502057Z level=info msg="Authentication is disabled" policy-pap | 
[2024-02-05T23:15:28.523+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup kafka | [2024-02-05 23:14:58,797] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql grafana | logger=grafana-apiserver t=2024-02-05T23:14:22.708935062Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" policy-pap | [2024-02-05T23:15:28.533+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-05T23:15:28Z, user=policyadmin)] kafka | [2024-02-05 23:14:58,797] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=grafana.update.checker t=2024-02-05T23:14:22.721740992Z level=info msg="Update check succeeded" duration=51.353229ms policy-pap | [2024-02-05T23:15:48.972+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=687caf5b-4d92-42de-acdb-f82aab7cc43c, expireMs=1707174948972] kafka | [2024-02-05 23:14:58,797] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT grafana | logger=sqlstore.transactions t=2024-02-05T23:14:22.755716663Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" policy-pap | [2024-02-05T23:15:49.071+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=54fa79c2-9992-4631-b988-2a9cecf2df7f, expireMs=1707174949070] kafka | [2024-02-05 23:14:58,804] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=plugins.update.checker t=2024-02-05T23:14:22.76650818Z level=info msg="Update check succeeded" duration=95.51499ms policy-pap | [2024-02-05T23:15:49.078+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup kafka | [2024-02-05 23:14:58,805] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | grafana | logger=infra.usagestats t=2024-02-05T23:16:02.683692382Z level=info msg="Usage stats are ready to report" policy-pap | [2024-02-05T23:15:49.080+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup kafka | [2024-02-05 23:14:58,805] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-02-05 23:14:58,805] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0100-upgrade.sql kafka | [2024-02-05 23:14:58,805] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id 
Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,818] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | select 'upgrade to 1100 completed' as msg kafka | [2024-02-05 23:14:58,819] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,819] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-02-05 23:14:58,819] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | msg kafka | [2024-02-05 23:14:58,820] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | upgrade to 1100 completed kafka | [2024-02-05 23:14:58,829] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | kafka | [2024-02-05 23:14:58,829] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql kafka | [2024-02-05 23:14:58,829] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,829] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME kafka | [2024-02-05 23:14:58,829] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for 
partition __consumer_offsets-34 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) policy-db-migrator | > upgrade 0120-audit_sequence.sql kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT 
NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 
for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) policy-db-migrator | > upgrade 0130-statistics_sequence.sql kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 
from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | TRUNCATE TABLE sequence kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-db-migrator | kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) policy-db-migrator | > upgrade 0100-pdpstatistics.sql kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-db-migrator | --------------
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE statistics_sequence
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | name version
policy-db-migrator | policyadmin 1300
policy-db-migrator | ID script operation from_version to_version tag success atTime
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0502242314261100u 1 2024-02-05 23:14:31
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0502242314261200u 1 2024-02-05 23:14:31
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0502242314261200u 1 2024-02-05 23:14:31
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0502242314261200u 1 2024-02-05 23:14:31
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0502242314261200u 1 2024-02-05 23:14:32
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0502242314261300u 1 2024-02-05 23:14:32
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0502242314261300u 1 2024-02-05 23:14:32
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0502242314261300u 1 2024-02-05 23:14:32
policy-db-migrator | policyadmin: OK @ 1300
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-02-05 23:14:58,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,838] INFO [Broker id=1] Finished LeaderAndIsr request in 526ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
kafka | [2024-02-05 23:14:58,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 5 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,849] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=csBd2HU8Tmiot-5BjYrBHg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for 
request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,850] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,850] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,850] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,858] INFO [GroupMetadataManager brokerId=1] 
Finished loading offsets and group metadata from __consumer_offsets-26 in 21 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,859] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,859] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,860] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,860] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,860] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,860] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-02-05 23:14:58,860] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 4 (state.change.logger)
kafka | [2024-02-05 23:14:58,860] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-02-05 23:14:58,859] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,860] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,860] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 24 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-05 23:14:58,894] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 82113737-2238-440a-b31e-67419d0ce49a in Empty state. Created a new member id consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,908] INFO [GroupCoordinator 1]: Preparing to rebalance group 82113737-2238-440a-b31e-67419d0ce49a in state PreparingRebalance with old generation 0 (__consumer_offsets-32) (reason: Adding new member consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,956] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:58,960] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:59,179] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 447a3058-d755-46ac-8e2e-59b142489c6a in Empty state. Created a new member id consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:14:59,182] INFO [GroupCoordinator 1]: Preparing to rebalance group 447a3058-d755-46ac-8e2e-59b142489c6a in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:15:01,922] INFO [GroupCoordinator 1]: Stabilized group 82113737-2238-440a-b31e-67419d0ce49a generation 1 (__consumer_offsets-32) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:15:01,946] INFO [GroupCoordinator 1]: Assignment received from leader consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a for group 82113737-2238-440a-b31e-67419d0ce49a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:15:01,960] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:15:01,964] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:15:02,183] INFO [GroupCoordinator 1]: Stabilized group 447a3058-d755-46ac-8e2e-59b142489c6a generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-05 23:15:02,203] INFO [GroupCoordinator 1]: Assignment received from leader consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d for group 447a3058-d755-46ac-8e2e-59b142489c6a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping policy-api ...
Stopping grafana ...
Stopping kafka ...
Stopping simulator ...
Stopping prometheus ...
Stopping compose_zookeeper_1 ...
Stopping mariadb ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing grafana ...
Removing kafka ...
Removing simulator ...
Removing prometheus ...
Removing compose_zookeeper_1 ...
Removing mariadb ...
Removing simulator ... done
Removing prometheus ... done
Removing grafana ... done
Removing compose_zookeeper_1 ... done
Removing policy-apex-pdp ... done
Removing policy-api ... done
Removing policy-db-migrator ... done
Removing mariadb ... done
Removing policy-pap ... done
Removing kafka ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.Hjz3EwQKXg ]]
+ rsync -av /tmp/tmp.Hjz3EwQKXg/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 910,202 bytes  received 95 bytes  1,820,594.00 bytes/sec
total size is 909,656  speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 0
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2118 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12425816479566449737.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4593238408155188927.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6683213572039224443.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NYvV from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-NYvV/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7532453900968516086.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config8894190360013508123tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8426494187122072295.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17052616562774512711.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NYvV from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-NYvV/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2464292623418287803.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5702974967367551005.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NYvV from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-NYvV/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins8577616985268140686.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NYvV from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
python-openstackclient 6.5.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-NYvV/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1562
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-2890 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   15G  141G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001
---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         850       24624           0        6692       30860
Swap:          1023           0        1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:80:f1:a3 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.11/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85929sec preferred_lft 85929sec
    inet6 fe80::f816:3eff:fe80:f1a3/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:64:91:67:cd brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-2890)  02/05/24  _x86_64_  (8 CPU)
23:10:26     LINUX RESTART      (8 CPU)
23:11:01          tps      rtps      wtps   bread/s   bwrtn/s
23:12:01       109.07     36.49     72.57   1699.98  24459.39
23:13:01       127.31     23.06    104.25   2766.34  30184.17
23:14:01       210.23      0.18    210.05     20.66 115451.29
23:15:01       360.86     12.68    348.18    810.30  76792.28
23:16:01        17.98      0.07     17.91      3.07  18053.29
23:17:01        23.50      0.10     23.40     15.06  19102.60
23:18:01        80.99      1.95     79.04    112.11  21355.47
Average:       132.85     10.65    122.20    775.36  43628.36
23:11:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
23:12:01     30101604  31715608   2837616      8.61     69980   1853844   1401444      4.12    857392   1689480    188036
23:13:01     28968892  31657132   3970328     12.05     98528   2863992   1581924      4.65   1000000   2603596    839656
23:14:01     25208284  31643476   7730936     23.47    142188   6406236   1588644      4.67   1049880   6142916   1554048
23:15:01     23077844  29631520   9861376     29.94    158120   6486928   9000532     26.48   3233736   6007984      1372
23:16:01     23029608  29584476   9909612     30.08    158300   6487480   8838720     26.01   3284564   6004508       272
23:17:01     23071980  29653684   9867240     29.96    158684   6515856   8096960     23.82   3231740   6019036       196
23:18:01     25278060  31662748   7661160     23.26    162404   6331184   1482948      4.36   1236320   5866040     31676
Average:     25533753  30792663   7405467     22.48    135458   5277931   4570167     13.45   1984805   4904794    373608
23:11:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:12:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:12:01         ens3    104.32     72.34   1072.47     15.28      0.00      0.00      0.00      0.00
23:12:01           lo      1.73      1.73      0.18      0.18      0.00      0.00      0.00      0.00
23:13:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01  br-616c88bdd522  0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01         ens3    168.69    114.13   4266.81     12.85      0.00      0.00      0.00      0.00
23:13:01           lo      6.33      6.33      0.59      0.59      0.00      0.00      0.00      0.00
23:14:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01  br-616c88bdd522  0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01         ens3   1173.42    627.46  31165.96     46.26      0.00      0.00      0.00      0.00
23:14:01           lo      7.47      7.47      0.74      0.74      0.00      0.00      0.00      0.00
23:15:01  veth451b918     27.00     25.15     11.56     16.27      0.00      0.00      0.00      0.00
23:15:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:15:01  veth87408d9      0.15      0.50      0.01      0.03      0.00      0.00      0.00      0.00
23:15:01  br-616c88bdd522  0.90      0.80      0.07      0.32      0.00      0.00      0.00      0.00
23:16:01  veth451b918     26.51     22.30      8.44     24.17      0.00      0.00      0.00      0.00
23:16:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:16:01  veth87408d9      0.48      0.47      0.05      1.48      0.00      0.00      0.00      0.00
23:16:01  br-616c88bdd522  2.10      2.38      1.81      1.73      0.00      0.00      0.00      0.00
23:17:01  veth451b918      0.40      0.47      0.59      0.03      0.00      0.00      0.00      0.00
23:17:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:17:01  br-616c88bdd522  1.17      1.48      0.10      0.14      0.00      0.00      0.00      0.00
23:17:01  veth13c067f      0.00      0.38      0.00      0.02      0.00      0.00      0.00      0.00
23:18:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:01         ens3   1869.92   1086.12  37338.04    158.32      0.00      0.00      0.00      0.00
23:18:01           lo     35.49     35.49      6.25      6.25      0.00      0.00      0.00      0.00
Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         ens3    220.51    125.77   5233.66     15.87      0.00      0.00      0.00      0.00
Average:           lo      4.52      4.52      0.85      0.85      0.00      0.00      0.00      0.00
---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-2890)  02/05/24  _x86_64_  (8 CPU)
23:10:26     LINUX RESTART      (8 CPU)
23:11:01      CPU     %user     %nice   %system   %iowait    %steal     %idle
23:12:01      all     10.53      0.00      0.87      1.83      0.03     86.74
23:12:01        0      1.08      0.00      0.62      6.16      0.02     92.12
23:12:01        1      9.78      0.00      0.82      0.70      0.08     88.62
23:12:01        2     33.78      0.00      2.26      6.62      0.07     57.28
23:12:01        3     26.00      0.00      1.39      0.37      0.05     72.20
23:12:01        4      4.99      0.00      0.58      0.05      0.02     94.36
23:12:01        5      4.36      0.00      0.50      0.30      0.02     94.83
23:12:01        6      1.18      0.00      0.38      0.07      0.02     98.35
23:12:01        7      3.17      0.00      0.40      0.35      0.00     96.08
23:13:01      all     11.10      0.00      1.72      2.30      0.04     84.84
23:13:01        0      4.75      0.00      1.34      0.75      0.03     93.12
23:13:01        1     12.11      0.00      2.02      1.04      0.07     84.77
23:13:01        2      4.45      0.00      1.07      2.66      0.05     91.76
23:13:01        3      8.34      0.00      1.36      0.25      0.03     90.01
23:13:01        4      9.38      0.00      1.64      0.07      0.03     88.88
23:13:01        5      2.46      0.00      1.57     11.09      0.02     84.87
23:13:01        6     25.90      0.00      2.68      2.12      0.05     69.25
23:13:01        7     21.29      0.00      2.06      0.49      0.05     76.11
23:14:01      all     11.74      0.00      5.26      8.15      0.07     74.78
23:14:01        0     13.56      0.00      5.50      4.72      0.08     76.13
23:14:01        1     12.92      0.00      5.97      0.73      0.07     80.31
23:14:01        2      9.33      0.00      6.56      0.36      0.08     83.67
23:14:01        3     11.61      0.00      3.03      3.10      0.07     82.18
23:14:01        4     12.30      0.00      5.27      0.81      0.05     81.56
23:14:01        5      9.67      0.00      5.23     23.61      0.07     61.42
23:14:01        6     13.88      0.00      4.87     12.04      0.09     69.13
23:14:01        7     10.60      0.00      5.61     19.98      0.09     63.72
23:15:01      all     25.50      0.00      3.63      4.76      0.09     66.02
23:15:01        0     28.95      0.00      3.77      1.07      0.10     66.11
23:15:01        1     27.60      0.00      3.89      1.46      0.08     66.97
23:15:01        2     23.85      0.00      3.46      5.06      0.07     67.57
23:15:01        3     21.86      0.00      3.02      0.27      0.07     74.78
23:15:01        4     28.16      0.00      3.97      1.04      0.10     66.73
23:15:01        5     26.07      0.00      3.32      2.71      0.07     67.83
23:15:01        6     22.67      0.00      3.87     20.87      0.10     52.49
23:15:01        7     24.83      0.00      3.76      5.63      0.08     65.70
23:16:01      all      6.75      0.00      0.56      0.98      0.06     91.65
23:16:01        0      7.87      0.00      0.58      0.00      0.05     91.50
23:16:01        1      6.23      0.00      0.63      0.22      0.08     92.84
23:16:01        2      7.57      0.00      0.55      7.44      0.07     84.37
23:16:01        3      6.99      0.00      0.60      0.00      0.07     92.35
23:16:01        4      7.96      0.00      0.67      0.00      0.03     91.34
23:16:01        5      5.17      0.00      0.50      0.07      0.07     94.19
23:16:01        6      7.10      0.00      0.50      0.00      0.05     92.35
23:16:01        7      5.13      0.00      0.47      0.12      0.08     94.20
23:17:01      all      1.32      0.00      0.34      1.00      0.05     97.29
23:17:01        0      1.29      0.00      0.40      0.00      0.05     98.26
23:17:01        1      0.84      0.00      0.33      0.18      0.05     98.60
23:17:01        2      1.10      0.00      0.33      7.28      0.05     91.23
23:17:01        3      2.32      0.00      0.42      0.00      0.05     97.21
23:17:01        4      1.10      0.00      0.27      0.00      0.05     98.58
23:17:01        5      0.94      0.00      0.35      0.39      0.07     98.26
23:17:01        6      1.85      0.00      0.30      0.12      0.03     97.70
23:17:01        7      1.09      0.00      0.30      0.02      0.05     98.55
23:18:01      all      6.33      0.00      0.66      1.39      0.04     91.58
23:18:01        0      1.02      0.00      0.50      0.28      0.03     98.16
23:18:01        1      2.78      0.00      0.70      1.33      0.02     95.16
23:18:01        2      1.04      0.00      0.48      8.08      0.02     90.39
23:18:01        3      0.97      0.00      0.43      0.50      0.03     98.06
23:18:01        4     18.01      0.00      1.00      0.33      0.05     80.61
23:18:01        5     10.04      0.00      0.55      0.32      0.03     89.06
23:18:01        6      5.45      0.00      0.58      0.05      0.02     93.90
23:18:01        7     11.40      0.00      0.94      0.25      0.07     87.35
Average:      all     10.45      0.00      1.85      2.90      0.05     84.74
Average:        0      8.33      0.00      1.81      1.85      0.05     87.96
Average:        1     10.31      0.00      2.05      0.81      0.06     86.77
Average:        2     11.59      0.00      2.09      5.37      0.06     80.90
Average:        3     11.15      0.00      1.46      0.64      0.05     86.70
Average:        4     11.68      0.00      1.90      0.33      0.05     86.04
Average:        5      8.36      0.00      1.71      5.45      0.05     84.43
Average:        6     11.12      0.00      1.87      5.00      0.05     81.96
Average:        7     11.06      0.00      1.92      3.79      0.06     83.18