Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-14039 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-735nlZoSfOFa/agent.2081
SSH_AGENT_PID=2083
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_10773109421589806918.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_10773109421589806918.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=30
Commit message: "Remove Dmaap configurations from CSITs"
 > git rev-list --no-walk caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=10
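The checkout sequence above fetches all branches and then force-checks-out a single pinned revision, so the build is reproducible even if master moves during the job. A minimal sketch of the same detached-checkout pattern, using a throwaway local repository in place of the Gerrit mirror (repository contents and commit messages here are illustrative):

```shell
# Stand-in for the mirror repo; the job's real target is
# git://cloud.onap.org/mirror/policy/docker.git with a pinned SHA.
work=$(mktemp -d)
git init -q "$work/repo"
cd "$work/repo"
git config user.email ci@example.invalid
git config user.name ci

echo v1 > file; git add file; git commit -qm 'first'
pinned=$(git rev-parse HEAD)              # the SHA we want to build
echo v2 > file; git commit -qam 'second'  # master moves on afterwards

# Same pattern as the log: force-checkout the pinned commit (detached HEAD)
git checkout -qf "$pinned"
test "$(git rev-parse HEAD)" = "$pinned" && echo "building pinned revision"
```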
provisioning config files... copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10217993218259060493.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-Zh8L lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-Zh8L/bin to PATH Generating Requirements File Python 3.10.6 pip 23.3.2 from /tmp/venv-Zh8L/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.2.1 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.23 botocore==1.34.23 bs4==0.0.2 cachetools==5.3.2 certifi==2023.11.17 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.5.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==2.5.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.5.0 docker==4.2.2 dogpile.cache==1.3.0 email-validator==2.1.0.post1 filelock==3.13.1 future==0.18.3 gitdb==4.0.11 GitPython==3.1.41 google-auth==2.26.2 httplib2==0.22.0 identify==2.5.33 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.5.0 kubernetes==29.0.0 lftools==0.37.8 lxml==5.1.0 MarkupSafe==2.1.4 msgpack==1.0.7 multi_key_dict==2.0.3 munch==4.0.0 netaddr==0.10.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==0.62.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.0 oslo.config==9.3.0 oslo.context==5.3.0 oslo.i18n==6.2.0 oslo.log==5.4.0 oslo.serialization==5.3.0 oslo.utils==7.0.0 
packaging==23.2 pbr==6.0.0 platformdirs==4.1.0 prettytable==3.9.0 pyasn1==0.5.1 pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.1.1 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.4.0 python-dateutil==2.8.2 python-heatclient==3.4.0 python-jenkins==1.8.2 python-keystoneclient==5.3.0 python-magnumclient==4.3.0 python-novaclient==18.4.0 python-openstackclient==6.0.0 python-swiftclient==4.4.0 pytz==2023.3.post1 PyYAML==6.0.1 referencing==0.32.1 requests==2.31.0 requests-oauthlib==1.3.1 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.17.1 rsa==4.9 ruamel.yaml==0.18.5 ruamel.yaml.clib==0.2.8 s3transfer==0.10.0 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.1.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.3 tqdm==4.66.1 typing_extensions==4.9.0 tzdata==2023.4 urllib3==1.26.18 virtualenv==20.25.0 wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.2.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
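EnvInject turns the injected properties shown above (SET_JDK_VERSION, GIT_URL) into environment variables for the following build steps. A rough plain-shell sketch of that behaviour — the parsing loop is an illustration, not EnvInject's actual implementation:

```shell
# Illustrative properties file holding the two values from the log
props=$(mktemp)
cat > "$props" <<'EOF'
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
EOF

# Export each KEY=VALUE pair, stripping optional surrounding quotes.
# Redirecting into the loop (not piping) keeps the exports in this shell.
while IFS='=' read -r key value; do
    value=${value#\"}; value=${value%\"}
    export "$key=$value"
done < "$props"

echo "$SET_JDK_VERSION"   # openjdk17
echo "$GIT_URL"           # git://cloud.onap.org/mirror
```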
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins12657488408940214260.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins18200513181673086773.sh + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + export ROBOT_VARIABLES= + 
ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.6gxXuOa3EW ++ echo ROBOT_VENV=/tmp/tmp.6gxXuOa3EW +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.6gxXuOa3EW ++ source /tmp/tmp.6gxXuOa3EW/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.6gxXuOa3EW +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.6gxXuOa3EW/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.6gxXuOa3EW) ' '!=' x ']' +++ PS1='(tmp.6gxXuOa3EW) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 
urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.6gxXuOa3EW/src/onap ++ rm -rf /tmp/tmp.6gxXuOa3EW/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. ++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==0.18.3 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q Linux ++ sudo apt-get -y -qq 
install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.6gxXuOa3EW/bin/activate + '[' -z /tmp/tmp.6gxXuOa3EW/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.6gxXuOa3EW/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.6gxXuOa3EW ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.6gxXuOa3EW/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.6gxXuOa3EW) ' ++ '[' 'x(tmp.6gxXuOa3EW) ' '!=' x ']' ++ PS1='(tmp.6gxXuOa3EW) (tmp.6gxXuOa3EW) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.C9xkkUvsOC + cd /tmp/tmp.C9xkkUvsOC + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. 
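The two warnings above come from passing `-p docker` on the command line, where the password is visible in the process's argument list and ends up cached unencrypted. Docker's suggested fix is to pipe the secret in instead, e.g. `echo "$PASS" | docker login -u docker --password-stdin nexus3.onap.org:10001`. The stdin-piping pattern, demonstrated with a hypothetical stand-in function since no registry is needed for the sketch:

```shell
# Hypothetical stand-in for `docker login --password-stdin`:
# the secret arrives on stdin and never appears in argv (so `ps` can't see it).
login_with_stdin() {
    IFS= read -r secret
    echo "got a ${#secret}-character secret via stdin"
}

printf '%s\n' "docker" | login_with_stdin
```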
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 
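Earlier in this step, node-templates.sh derived GERRIT_BRANCH with the awk one-liner shown in the trace. The same expression standalone, run against an illustrative .gitreview (the real file lives at the workspace root):

```shell
# Illustrative .gitreview content; only `defaultbranch` matters here
gitreview=$(mktemp)
cat > "$gitreview" <<'EOF'
[gerrit]
host=gerrit.onap.org
port=29418
project=policy/pap.git
defaultbranch=master
EOF

# Same awk expression as the log: split on '=', match the key, print the value
GERRIT_BRANCH=$(awk -F= '$1 == "defaultbranch" { print $2 }' "$gitreview")
echo "GERRIT_BRANCH=$GERRIT_BRANCH"   # GERRIT_BRANCH=master
```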
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose +++ grafana=false +++ gui=false +++ [[ 2 -gt 0 ]] +++ key=apex-pdp +++ case $key in +++ echo apex-pdp apex-pdp +++ component=apex-pdp +++ shift +++ [[ 1 -gt 0 ]] +++ key=--grafana +++ case $key in +++ grafana=true +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose +++ echo 'Configuring docker compose...' Configuring docker compose... +++ source export-ports.sh +++ source get-versions.sh +++ '[' -z pap ']' +++ '[' -n apex-pdp ']' +++ '[' apex-pdp == logs ']' +++ '[' true = true ']' +++ echo 'Starting apex-pdp application with Grafana' Starting apex-pdp application with Grafana +++ docker-compose up -d apex-pdp grafana Creating network "compose_default" with the default driver Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 
latest: Pulling from prom/prometheus
Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:6b5b37eb35bbf30e7f64bd7f0fd41c0a5b7637f65d3bf93223b04a192b8bf3e2
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:eb47623eeab9aad8524ecc877b6708ae74b57f9f3cfe77554ad0d1521491cb5d
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.0)...
3.1.0: Pulling from onap/policy-pap
Digest: sha256:ff420a18fdd0393b657dcd1ae9e545437067fe5610606e3999888c21302a6231
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.0
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT
Creating mariadb ...
Creating compose_zookeeper_1 ...
Creating simulator ...
Creating prometheus ...
Creating mariadb ... done
Creating policy-db-migrator ...
Creating compose_zookeeper_1 ... done
Creating kafka ...
Creating kafka ... done
Creating simulator ... done
Creating policy-db-migrator ... done
Creating policy-api ...
Creating prometheus ... done
Creating grafana ...
Creating grafana ... done
Creating policy-api ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
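The job then polls localhost:30003 until PAP's REST endpoint answers. The internals of wait_for_rest.sh are not shown in this log; a typical sketch of such a probe follows (the function name and retry count are assumptions, and bash's `/dev/tcp` is used where `nc -z host port` would be a common alternative):

```shell
# Poll a TCP port until it accepts connections or the retries run out.
wait_for_port() {
    host=$1; port=$2; tries=${3:-60}
    while [ "$tries" -gt 0 ]; do
        # The subshell opens (and, on exit, closes) a connection attempt
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            echo "REST is up on $host:$port"
            return 0
        fi
        tries=$((tries - 1))
        sleep 1
    done
    echo "timed out waiting for $host:$port"
    return 1
}
```

For example, `wait_for_port localhost 30003 60` would block for up to a minute, matching the "Waiting for REST..." message above.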
NAMES                 STATUS
policy-apex-pdp       Up 10 seconds
policy-pap            Up 11 seconds
grafana               Up 13 seconds
policy-api            Up 11 seconds
kafka                 Up 17 seconds
prometheus            Up 14 seconds
compose_zookeeper_1   Up 18 seconds
simulator             Up 16 seconds
mariadb               Up 19 seconds
NAMES                 STATUS
policy-apex-pdp       Up 15 seconds
policy-pap            Up 16 seconds
grafana               Up 18 seconds
policy-api            Up 17 seconds
kafka                 Up 22 seconds
prometheus            Up 19 seconds
compose_zookeeper_1   Up 23 seconds
simulator             Up 21 seconds
mariadb               Up 24 seconds
NAMES                 STATUS
policy-apex-pdp       Up 20 seconds
policy-pap            Up 21 seconds
grafana               Up 23 seconds
policy-api            Up 22 seconds
kafka                 Up 27 seconds
prometheus            Up 24 seconds
compose_zookeeper_1   Up 28 seconds
simulator             Up 26 seconds
mariadb               Up 29 seconds
NAMES                 STATUS
policy-apex-pdp       Up 25 seconds
policy-pap            Up 26 seconds
grafana               Up 28 seconds
policy-api            Up 27 seconds
kafka                 Up 32 seconds
prometheus            Up 29 seconds
compose_zookeeper_1   Up 33 seconds
simulator             Up 31 seconds
mariadb               Up 34 seconds
NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 31 seconds
grafana               Up 33 seconds
policy-api            Up 32 seconds
kafka                 Up 37 seconds
prometheus            Up 34 seconds
compose_zookeeper_1   Up 38 seconds
simulator             Up 36 seconds
mariadb               Up 39 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:15:04 up 4 min, 0 users, load average: 2.98, 1.39, 0.56
Tasks: 209 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
%Cpu(s): 12.7 us, 2.8 sy, 0.0 ni, 78.7 id, 5.7 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.8G         21G        1.3M        6.7G         28G
Swap:          1.0G          0B        1.0G
NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 31 seconds
grafana               Up 33 seconds
policy-api            Up 32 seconds
kafka                 Up 37 seconds
prometheus            Up 34 seconds
compose_zookeeper_1   Up 38 seconds
simulator             Up 36 seconds
mariadb               Up 39 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O       PIDS
50b647c65335   policy-apex-pdp       277.07%   190.2MiB / 31.41GiB   0.59%     7.16kB / 6.89kB   0B / 0B         49
1db58b627eec   policy-pap            1.78%     520.4MiB / 31.41GiB   1.62%     26.8kB / 28.9kB   0B / 181MB      62
2c9ab507ee0b   grafana               0.02%     51.51MiB / 31.41GiB   0.16%     18.2kB / 3.66kB   0B / 23.9MB     14
f9d8745ecf88   policy-api            0.29%     755.5MiB / 31.41GiB   2.35%     999kB / 710kB     0B / 0B         54
48182883a08d   kafka                 18.79%    361.8MiB / 31.41GiB   1.12%     64.2kB / 67.5kB   0B / 508kB      82
2fe872a509fe   prometheus            0.26%     18.66MiB / 31.41GiB   0.06%     1.6kB / 474B      205kB / 0B      11
8dc9896bd9c7   compose_zookeeper_1   0.11%     98.25MiB / 31.41GiB   0.31%     52.9kB / 45.8kB   0B / 377kB      60
3707560567d3   simulator             0.09%     124.2MiB / 31.41GiB   0.39%     1.36kB / 0B       0B / 0B         76
a8df4284cc57   mariadb               0.01%     101.9MiB / 31.41GiB   0.32%     996kB / 1.18MB    11MB / 67.9MB   40
+ echo
+ cd /tmp/tmp.C9xkkUvsOC
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
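The testplan handling above is a small pipeline: comment and blank lines are filtered out, each remaining suite name gets the tests directory prefixed, and xargs collapses the result into a single SUITES string. The same three steps in isolation, with the testplan contents taken from this log:

```shell
cd "$(mktemp -d)"   # scratch directory for the generated testplan.txt
TESTS_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests

# Filter comments/blanks, then prefix each remaining suite with the tests dir
printf '%s\n' '# testplan' '' 'pap-test.robot' 'pap-slas.robot' \
    | grep -Ev '(^[[:space:]]*#|^[[:space:]]*$)' \
    | sed "s|^|$TESTS_DIR/|" > testplan.txt

# xargs joins the lines into one whitespace-separated string
SUITES=$(xargs < testplan.txt)
echo "$SUITES"
```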
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | PASS |
22 tests, 22 passed, 0 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ...
| PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output:  /tmp/tmp.C9xkkUvsOC/output.xml
Log:     /tmp/tmp.C9xkkUvsOC/log.html
Report:  /tmp/tmp.C9xkkUvsOC/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
grafana               Up 2 minutes
policy-api            Up 2 minutes
kafka                 Up 2 minutes
prometheus            Up 2 minutes
compose_zookeeper_1   Up 2 minutes
simulator             Up 2 minutes
mariadb               Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:16:54 up 6 min,  0 users,  load average: 0.68, 1.05, 0.53
Tasks: 200 total,   1 running, 128 sleeping,   0 stopped,   0 zombie
%Cpu(s): 10.7 us,  2.2 sy,  0.0 ni, 82.9 id,  4.1 wa,  0.0 hi,  0.1 si,  0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.9G         21G        1.3M        6.7G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
grafana               Up 2 minutes
policy-api            Up 2 minutes
kafka                 Up 2 minutes
prometheus            Up 2 minutes
compose_zookeeper_1   Up 2 minutes
simulator             Up 2 minutes
mariadb               Up 2 minutes
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O       PIDS
50b647c65335   policy-apex-pdp       2.24%     182.6MiB / 31.41GiB   0.57%     56.1kB / 90.4kB   0B / 0B         50
1db58b627eec   policy-pap            1.10%     490.5MiB / 31.41GiB   1.52%     2.33MB / 815kB    0B / 181MB      64
2c9ab507ee0b   grafana               0.03%     52.84MiB / 31.41GiB   0.16%     19.2kB / 4.69kB   0B / 23.9MB     14
f9d8745ecf88   policy-api            0.09%     774MiB / 31.41GiB     2.41%     2.49MB / 1.26MB   0B / 0B         55
48182883a08d   kafka                 1.21%     379.5MiB / 31.41GiB   1.18%     233kB / 210kB     0B / 606kB      83
2fe872a509fe   prometheus            0.44%     25.57MiB / 31.41GiB   0.08%     191kB / 11kB      205kB / 0B      13
8dc9896bd9c7   compose_zookeeper_1   0.14%     98.3MiB / 31.41GiB    0.31%     55.7kB / 47.3kB   0B / 377kB      60
3707560567d3   simulator             0.07%     124.2MiB / 31.41GiB   0.39%     1.58kB / 0B       0B / 0B         76
a8df4284cc57   mariadb               0.01%     103.2MiB / 31.41GiB   0.32%     1.95MB / 4.77MB   11MB / 68.2MB   28
+ echo
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
++ echo 'Shut down started!'
Shut down started!
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
++ source export-ports.sh
++ source get-versions.sh
++ echo 'Collecting logs from docker compose containers...'
Collecting logs from docker compose containers...
++ docker-compose logs
++ cat docker_compose.log
Attaching to policy-apex-pdp, policy-pap, grafana, policy-api, kafka, policy-db-migrator, prometheus, compose_zookeeper_1, simulator, mariadb
mariadb | 2024-01-21 23:14:24+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
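[Editor's note] The source_safely / relax_set / load_set idiom traced above (relaxing `set -e` and `pipefail` before sourcing helper scripts such as stop-compose.sh, then restoring the saved options) can be sketched as follows. The function bodies are reconstructed from the xtrace output, not copied from the job's actual shell library, so treat this as an approximation:

```shell
#!/bin/bash
# Sketch of the option save/restore idiom visible in the xtrace above.
# The real helpers live in the CSIT job's shell scripts; these bodies
# are reconstructed from the trace and are only an approximation.

relax_set() {
    set +e            # tolerate failing commands inside sourced scripts
    set +o pipefail
}

save_set() {
    _setopts="$-"     # remember the current single-letter flags (e.g. hxB)
}

load_set() {
    # Restore each saved flag one letter at a time, as the trace does.
    # Invalid letters (invocation-only flags) are skipped harmlessly.
    for i in $(echo "$_setopts" | sed 's/./& /g'); do
        set "-$i" 2>/dev/null || :
    done
}

source_safely() {
    # Run a script with relaxed error handling, then restore the options.
    save_set
    relax_set
    . "$1"
    load_set
}
```

A sourced script that fails partway (for example, a `docker-compose` teardown step) then cannot abort the surrounding job, while `set -e` behaviour is reinstated immediately afterwards.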
mariadb | 2024-01-21 23:14:24+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mariadb | 2024-01-21 23:14:24+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-01-21 23:14:25+00:00 [Note] [Entrypoint]: Initializing database files
mariadb | 2024-01-21 23:14:25 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-01-21 23:14:25 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-01-21 23:14:25 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb |
mariadb |
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mariadb | To do so, start the server, then issue the following command:
mariadb |
mariadb | '/usr/bin/mysql_secure_installation'
mariadb |
mariadb | which will also give you the option of removing the test
mariadb | databases and anonymous user created by default. This is
mariadb | strongly recommended for production servers.
mariadb |
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
mariadb |
mariadb | Please report any problems at https://mariadb.org/jira
mariadb |
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
mariadb |
mariadb | Consider joining MariaDB's strong and vibrant community:
mariadb | https://mariadb.org/get-involved/
mariadb |
mariadb | 2024-01-21 23:14:26+00:00 [Note] [Entrypoint]: Database files initialized
mariadb | 2024-01-21 23:14:26+00:00 [Note] [Entrypoint]: Starting temporary server
mariadb | 2024-01-21 23:14:26+00:00 [Note] [Entrypoint]: Waiting for server startup
mariadb | 2024-01-21 23:14:26 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 98 ...
mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-01-21 23:14:26 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-01-21 23:14:26 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-01-21 23:14:26 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: log sequence number 46590; transaction id 14
mariadb | 2024-01-21 23:14:26 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-01-21 23:14:26 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb | 2024-01-21 23:14:26 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-01-21 23:14:26 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-01-21 23:14:26 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 0  mariadb.org binary distribution
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
kafka | [2024-01-21 23:14:30,376] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:30,376] INFO Client environment:host.name=48182883a08d (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:30,376] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:30,376] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:30,376] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:30,376] INFO Client
environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.
5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:30,377] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:30,377] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:30,377] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:30,377] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:30,377] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:30,377] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:30,377] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:30,377] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:30,377] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:30,377] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 
23:14:30,378] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:30,378] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:30,380] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:30,384] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-01-21 23:14:30,388] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-01-21 23:14:30,395] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-21 23:14:30,411] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-21 23:14:30,411] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-21 23:14:30,421] INFO Socket connection established, initiating session, client: /172.17.0.7:55618, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-21 23:14:30,459] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003ef040000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-21 23:14:30,583] INFO Session: 0x1000003ef040000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:30,583] INFO EventThread shut down for session: 0x1000003ef040000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
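[Editor's note] The "Check if Zookeeper is healthy" preflight above is performed by Confluent's startup tooling inside the kafka image. A minimal stand-in, assuming only bash and its /dev/tcp pseudo-device, looks like this; `wait_for_zookeeper` is a hypothetical helper, not the image's actual check:

```shell
#!/bin/bash
# Minimal sketch of a Zookeeper readiness gate: poll the client port
# until it accepts a TCP connection, the condition the kafka container
# must observe before "===> Launching kafka ...". This is illustrative
# only; the image's real preflight does a richer protocol-level check.

wait_for_zookeeper() {
    local host="$1" port="$2" retries="${3:-30}" i
    for ((i = 0; i < retries; i++)); do
        # bash's /dev/tcp opens a TCP connection inside a throwaway subshell
        if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Usage (mirrors the compose wiring: service "zookeeper", port 2181):
# wait_for_zookeeper zookeeper 2181 40 || { echo "zookeeper not healthy"; exit 1; }
```

Gating on the port like this explains the ordering seen in the logs: the kafka container's first session to zookeeper/172.17.0.3:2181 is established and closed before the broker itself launches.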
kafka | [2024-01-21 23:14:31,245] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2024-01-21 23:14:31,557] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-01-21 23:14:31,650] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2024-01-21 23:14:31,651] INFO starting (kafka.server.KafkaServer)
kafka | [2024-01-21 23:14:31,652] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2024-01-21 23:14:31,668] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-01-21 23:14:31,673] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-01-21 23:14:27+00:00 [Note] [Entrypoint]: Temporary server started.
mariadb | 2024-01-21 23:14:29+00:00 [Note] [Entrypoint]: Creating user policy_user
mariadb | 2024-01-21 23:14:29+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
mariadb |
mariadb | 2024-01-21 23:14:29+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
mariadb |
mariadb | 2024-01-21 23:14:29+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
mariadb | #!/bin/bash -xv
mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
mariadb | #
mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
mariadb | # you may not use this file except in compliance with the License.
mariadb | # You may obtain a copy of the License at
mariadb | #
mariadb | #         http://www.apache.org/licenses/LICENSE-2.0
mariadb | #
mariadb | # Unless required by applicable law or agreed to in writing, software
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
mariadb | # See the License for the specific language governing permissions and
mariadb | # limitations under the License.
mariadb |
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
mariadb |     mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb |     mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON
`operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp kafka | [2024-01-21 23:14:31,673] INFO Client environment:host.name=48182883a08d (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:
/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/ja
va/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/ja
va/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-21 23:14:31,673] INFO Client environment:user.dir=/home/appuser 
(org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:31,673] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-21 23:14:31,673] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | ===> User
zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper_1 | ===> Configuring ...
zookeeper_1 | ===> Running preflight checks ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper_1 | ===> Launching ...
zookeeper_1 | ===> Launching zookeeper ...
zookeeper_1 | [2024-01-21 23:14:28,998] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-21 23:14:29,005] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-21 23:14:29,006] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-21 23:14:29,006] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-21 23:14:29,006] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-21 23:14:29,007] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-01-21 23:14:29,007] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-01-21 23:14:29,007] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-01-21 23:14:29,007] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper_1 | [2024-01-21 23:14:29,008] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper_1 | [2024-01-21 23:14:29,009] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-21 23:14:29,010] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-21 23:14:29,010] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-21 23:14:29,010] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-21 23:14:29,010] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-21 23:14:29,010] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper_1 | [2024-01-21 23:14:29,024] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@5fa07e12 (org.apache.zookeeper.server.ServerMetrics)
zookeeper_1 | [2024-01-21 23:14:29,026] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-01-21 23:14:29,026] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-01-21 23:14:29,029] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-01-21 23:14:29,039] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,039] INFO   ______                  _                                           (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,039] INFO  |___  /                 | |                                          (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,039] INFO     / /    ___     ___   | | __   ___    ___   _ __     ___   _ __   (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,039] INFO    / /    / _ \   / _ \  | |/ /  / _ \  / _ \ | '_ \   / _ \ | '__|  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,039] INFO   / /__  | (_) | | (_) | |   <  |  __/ |  __/ | |_) | |  __/ | |     (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,039] INFO  /_____|  \___/   \___/  |_|\_\  \___|  \___| | .__/   \___| |_|     (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,039] INFO                                               | |                     (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,039] INFO                                               |_|                     (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,039] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:host.name=8dc9896bd9c7 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/
zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/.
./share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/u
sr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT 
EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-01-21 23:14:30+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-01-21 23:14:30 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Starting shutdown... mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Buffer pool(s) dump completed at 240121 23:14:30 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Shutdown completed; log sequence number 332242; transaction id 298 mariadb | 2024-01-21 23:14:30 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-01-21 23:14:30+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-01-21 23:14:30+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2024-01-21 23:14:30 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
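The mariadb entrypoint output above traces a loop over the database names, issuing a `CREATE DATABASE` and a `GRANT` for each before a final `FLUSH PRIVILEGES`. A minimal sketch of that loop, reconstructed from the logged `+ mysql …` trace lines (the function only prints the SQL rather than piping it into `mysql`, since no live server is assumed here):

```shell
# Hypothetical reconstruction of the init loop seen in the entrypoint trace.
# Prints the SQL statements instead of executing them against a server.
gen_policy_db_sql() {
  local db
  for db in migration pooling policyadmin operationshistory clampacm policyclamp; do
    echo "CREATE DATABASE IF NOT EXISTS ${db};"
    echo "GRANT ALL PRIVILEGES ON \`${db}\`.* TO 'policy_user'@'%' ;"
  done
  echo "FLUSH PRIVILEGES;"
}

gen_policy_db_sql
```

In the real entrypoint each statement is passed to `mysql -uroot -p… --execute`, as the `+` trace lines in the log show.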
mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-01-21 23:14:30 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-01-21 23:14:30 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-21 23:14:30 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: log sequence number 332242; transaction id 299 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-01-21 23:14:30 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-01-21 23:14:30 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 
zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,042] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper_1 | [2024-01-21 23:14:29,043] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,043] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,044] INFO getData response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-01-21 23:14:29,044] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-21 23:14:29,047] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-21 23:14:29,047] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.5:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . 
____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.4) policy-api | policy-api | [2024-01-21T23:14:39.260+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 16 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-01-21T23:14:39.262+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-01-21T23:14:41.158+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-01-21T23:14:41.264+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 95 ms. Found 6 JPA repository interfaces. policy-api | [2024-01-21T23:14:41.728+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-01-21T23:14:41.728+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-01-21T23:14:42.467+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-01-21T23:14:42.479+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-01-21T23:14:42.481+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-01-21T23:14:42.481+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] policy-api | [2024-01-21T23:14:42.595+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-01-21T23:14:42.595+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3250 ms policy-api | [2024-01-21T23:14:43.083+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-01-21T23:14:43.178+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-api | [2024-01-21T23:14:43.182+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer policy-api | [2024-01-21T23:14:43.237+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-01-21T23:14:43.610+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-01-21T23:14:43.632+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-01-21T23:14:43.743+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717 policy-api | [2024-01-21T23:14:43.746+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.5:3306) open policy-pap | Waiting for kafka port 9092... 
policy-pap | kafka (172.17.0.7:9092) open policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.8:6969) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | :: Spring Boot :: (v3.1.4) policy-pap | policy-pap | [2024-01-21T23:14:53.333+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 29 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2024-01-21T23:14:53.335+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-pap | [2024-01-21T23:14:55.253+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2024-01-21T23:14:55.365+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 101 ms. Found 7 JPA repository interfaces. policy-pap | [2024-01-21T23:14:55.774+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-01-21T23:14:55.775+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-01-21T23:14:56.557+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2024-01-21T23:14:56.568+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2024-01-21T23:14:56.572+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2024-01-21T23:14:56.572+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] policy-pap | [2024-01-21T23:14:56.684+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2024-01-21T23:14:56.684+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3274 ms policy-pap | [2024-01-21T23:14:57.181+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2024-01-21T23:14:57.281+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-pap | [2024-01-21T23:14:57.285+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer zookeeper_1 | [2024-01-21 23:14:29,047] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) grafana | logger=settings t=2024-01-21T23:14:30.737872041Z level=info msg="Starting Grafana" version=10.2.3 commit=1e84fede543acc892d2a2515187e545eb047f237 branch=HEAD compiled=2023-12-18T15:46:07Z policy-apex-pdp | Waiting for mariadb port 3306... mariadb | 2024-01-21 23:14:30 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 
policy-api | [2024-01-21T23:14:43.778+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) policy-pap | [2024-01-21T23:14:57.350+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2024-01-21T23:14:57.715+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer grafana | logger=settings t=2024-01-21T23:14:30.738085243Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini policy-db-migrator | Waiting for mariadb port 3306... policy-apex-pdp | Waiting for kafka port 9092... mariadb | 2024-01-21 23:14:30 0 [Note] Server socket created on IP: '0.0.0.0'. kafka | [2024-01-21 23:14:31,674] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d policy-api | [2024-01-21T23:14:43.780+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json zookeeper_1 | [2024-01-21 23:14:29,047] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) policy-pap | [2024-01-21T23:14:57.738+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... grafana | logger=settings t=2024-01-21T23:14:30.738096203Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused policy-apex-pdp | mariadb (172.17.0.5:3306) open mariadb | 2024-01-21 23:14:30 0 [Note] Server socket created on IP: '::'. 
kafka | [2024-01-21 23:14:31,675] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@32193bea (org.apache.zookeeper.ZooKeeper) prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" policy-api | [2024-01-21T23:14:45.841+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) simulator | overriding logback.xml zookeeper_1 | [2024-01-21 23:14:29,048] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | [2024-01-21T23:14:57.865+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2b03d52f grafana | logger=settings t=2024-01-21T23:14:30.738100253Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused policy-apex-pdp | kafka (172.17.0.7:9092) open mariadb | 2024-01-21 23:14:30 0 [Note] mariadbd: ready for connections. 
kafka | [2024-01-21 23:14:31,679] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" policy-api | [2024-01-21T23:14:45.845+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' simulator | 2024-01-21 23:14:28,196 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json zookeeper_1 | [2024-01-21 23:14:29,083] INFO Logging initialized @605ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) policy-pap | [2024-01-21T23:14:57.868+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. grafana | logger=settings t=2024-01-21T23:14:30.738104703Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused policy-apex-pdp | Waiting for pap port 6969... mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution kafka | [2024-01-21 23:14:31,686] INFO zookeeper.request.timeout value is 0. 
feature enabled=false (org.apache.zookeeper.ClientCnxn) prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" policy-api | [2024-01-21T23:14:47.199+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml simulator | 2024-01-21 23:14:28,275 INFO org.onap.policy.models.simulators starting zookeeper_1 | [2024-01-21 23:14:29,220] WARN o.e.j.s.ServletContextHandler@45385f75{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) policy-pap | [2024-01-21T23:14:57.921+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) grafana | logger=settings t=2024-01-21T23:14:30.738107443Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded! policy-apex-pdp | pap (172.17.0.10:6969) open mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Buffer pool(s) load completed at 240121 23:14:30 kafka | [2024-01-21 23:14:31,688] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
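The "Waiting for mariadb port 3306..." messages from policy-api, policy-pap, and policy-apex-pdp, and the repeated "nc: connect to mariadb ... Connection refused" lines from policy-db-migrator, are the signature of a simple TCP poll loop run before each container starts its main process. A hedged sketch of such a loop, assuming `nc -z` is available (the host, port, and retry count are illustrative parameters, not taken from the actual startup scripts):

```shell
# Hypothetical wait-for-port loop matching the "Waiting for <host> port <port>..."
# messages in the log; retries nc until the TCP port accepts connections or
# the retry budget is exhausted.
wait_for_port() {
  local host="$1" port="$2" retries="${3:-30}"
  echo "Waiting for ${host} port ${port}..."
  while ! nc -z "${host}" "${port}" 2>/dev/null; do
    retries=$((retries - 1))
    if [ "${retries}" -le 0 ]; then
      return 1
    fi
    sleep 1
  done
  echo "${host} (${port}) open"
}
```

Usage in an entrypoint would look like `wait_for_port mariadb 3306 && exec java -jar app.jar`, which is consistent with the "open" confirmations printed just before each component's startup banner.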
(kafka.zookeeper.ZooKeeperClient) prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" policy-api | [2024-01-21T23:14:48.067+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] simulator | 2024-01-21 23:14:28,275 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties zookeeper_1 | [2024-01-21 23:14:29,220] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) policy-pap | [2024-01-21T23:14:57.923+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead grafana | logger=settings t=2024-01-21T23:14:30.738110693Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" policy-db-migrator | 321 blocks policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' mariadb | 2024-01-21 23:14:31 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) kafka | [2024-01-21 23:14:31,696] INFO Opening socket connection to server 
zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" policy-api | [2024-01-21T23:14:49.254+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning simulator | 2024-01-21 23:14:28,497 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION zookeeper_1 | [2024-01-21 23:14:29,240] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) policy-pap | [2024-01-21T23:14:59.973+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) grafana | logger=settings t=2024-01-21T23:14:30.738113934Z level=info msg="Config overridden from command line" arg="default.log.mode=console" policy-db-migrator | Preparing upgrade release version: 0800 policy-apex-pdp | [2024-01-21T23:15:04.751+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] mariadb | 2024-01-21 23:14:31 15 [Warning] Aborted connection 15 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) kafka | [2024-01-21 23:14:31,704] INFO Socket connection established, initiating session, client: /172.17.0.7:55620, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) prometheus | ts=2024-01-21T23:14:29.713Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 policy-api | [2024-01-21T23:14:49.492+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with 
[org.springframework.security.web.session.DisableEncodeUrlFilter@58a01e47, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6149184e, org.springframework.security.web.context.SecurityContextHolderFilter@234a08ea, org.springframework.security.web.header.HeaderWriterFilter@2e26841f, org.springframework.security.web.authentication.logout.LogoutFilter@c7a7d3, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3413effc, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@56d3e4a9, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2542d320, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6f3a8d5e, org.springframework.security.web.access.ExceptionTranslationFilter@19bd1f98, org.springframework.security.web.access.intercept.AuthorizationFilter@729f8c5d] simulator | 2024-01-21 23:14:28,499 INFO org.onap.policy.models.simulators starting A&AI simulator zookeeper_1 | [2024-01-21 23:14:29,278] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) policy-pap | [2024-01-21T23:14:59.977+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' grafana | logger=settings t=2024-01-21T23:14:30.738117864Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" policy-db-migrator | Preparing upgrade release version: 0900 policy-apex-pdp | [2024-01-21T23:15:04.959+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: mariadb | 2024-01-21 23:14:32 61 [Warning] Aborted connection 61 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) kafka | [2024-01-21 23:14:31,713] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003ef040001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) prometheus 
| ts=2024-01-21T23:14:29.714Z caller=main.go:1039 level=info msg="Starting TSDB ..." policy-api | [2024-01-21T23:14:50.506+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' simulator | 2024-01-21 23:14:28,621 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-pap | [2024-01-21T23:15:00.625+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository grafana | logger=settings t=2024-01-21T23:14:30.738121414Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" policy-db-migrator | Preparing upgrade release version: 1000 policy-apex-pdp | allow.auto.create.topics = true mariadb | 2024-01-21 23:14:33 108 [Warning] Aborted connection 108 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) kafka | [2024-01-21 23:14:31,721] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) prometheus | ts=2024-01-21T23:14:29.717Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090 policy-api | [2024-01-21T23:14:50.572+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] simulator | 2024-01-21 23:14:28,632 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper_1 | [2024-01-21 23:14:29,278] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) policy-pap | [2024-01-21T23:15:01.244+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository grafana | logger=settings t=2024-01-21T23:14:30.738125274Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" policy-db-migrator | Preparing upgrade release version: 1100 policy-apex-pdp | auto.commit.interval.ms = 5000 kafka | [2024-01-21 23:14:32,064] INFO Cluster ID = -jrszSKtSKq5TnXDeh3xeA (kafka.server.KafkaServer) prometheus | ts=2024-01-21T23:14:29.717Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 policy-api | [2024-01-21T23:14:50.595+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' simulator | 2024-01-21 23:14:28,635 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper_1 | [2024-01-21 23:14:29,279] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) policy-pap | [2024-01-21T23:15:01.375+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository grafana | logger=settings t=2024-01-21T23:14:30.738134194Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" policy-db-migrator | Preparing upgrade release version: 1200 policy-apex-pdp | auto.include.jmx.reporter = true kafka | [2024-01-21 23:14:32,068] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) prometheus | ts=2024-01-21T23:14:29.724Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" policy-api | [2024-01-21T23:14:50.612+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.126 seconds (process running for 12.724) simulator | 2024-01-21 23:14:28,641 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 zookeeper_1 | [2024-01-21 23:14:29,284] WARN ServletContext@o.e.j.s.ServletContextHandler@45385f75{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) policy-pap | [2024-01-21T23:15:01.691+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: grafana | logger=settings t=2024-01-21T23:14:30.738138094Z level=info msg=Target target=[all] policy-db-migrator | Preparing upgrade release version: 1300 policy-apex-pdp | auto.offset.reset = latest kafka | [2024-01-21 23:14:32,124] INFO KafkaConfig values: prometheus | ts=2024-01-21T23:14:29.724Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=6.12µs policy-api | [2024-01-21T23:15:07.412+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' simulator | 2024-01-21 23:14:28,730 INFO Session workerName=node0 zookeeper_1 | [2024-01-21 23:14:29,293] INFO Started 
o.e.j.s.ServletContextHandler@45385f75{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) policy-pap | allow.auto.create.topics = true grafana | logger=settings t=2024-01-21T23:14:30.738145864Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-01-21T23:14:30.738149064Z level=info msg="Path Data" path=/var/lib/grafana policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-db-migrator | Done policy-db-migrator | name version policy-api | [2024-01-21T23:15:07.412+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-01-21T23:15:07.415+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms zookeeper_1 | [2024-01-21 23:14:29,311] INFO Started ServerConnector@304bb45b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) simulator | 2024-01-21 23:14:29,463 INFO Using GSON for REST calls grafana | logger=settings t=2024-01-21T23:14:30.738157444Z level=info msg="Path Logs" path=/var/log/grafana kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 policy-apex-pdp | check.crcs = true prometheus | ts=2024-01-21T23:14:29.724Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while" policy-db-migrator | policyadmin 0 policy-api | [2024-01-21T23:15:07.682+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: policy-pap | auto.commit.interval.ms = 5000 zookeeper_1 | [2024-01-21 23:14:29,312] INFO Started @835ms (org.eclipse.jetty.server.Server) simulator | 2024-01-21 23:14:29,536 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE} grafana | logger=settings t=2024-01-21T23:14:30.738160594Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins kafka | alter.config.policy.class.name = null policy-apex-pdp | client.dns.lookup = use_all_dns_ips prometheus | ts=2024-01-21T23:14:29.725Z 
caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-api | [] policy-pap | auto.include.jmx.reporter = true zookeeper_1 | [2024-01-21 23:14:29,312] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) simulator | 2024-01-21 23:14:29,543 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} grafana | logger=settings t=2024-01-21T23:14:30.738164734Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning kafka | alter.log.dirs.replication.quota.window.num = 11 policy-apex-pdp | client.id = consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-1 prometheus | ts=2024-01-21T23:14:29.725Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=109.951µs wal_replay_duration=489.655µs wbl_replay_duration=400ns total_replay_duration=856.708µs policy-db-migrator | upgrade: 0 -> 1300 policy-pap | auto.offset.reset = latest zookeeper_1 | [2024-01-21 23:14:29,319] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) simulator | 2024-01-21 23:14:29,551 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1837ms grafana | logger=settings t=2024-01-21T23:14:30.738170624Z level=info msg="App mode production" kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 policy-apex-pdp | client.rack = prometheus | ts=2024-01-21T23:14:29.727Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC policy-db-migrator | policy-pap | bootstrap.servers = [kafka:9092] zookeeper_1 | [2024-01-21 23:14:29,320] WARN maxCnxns is not configured, using default value 0. 
(org.apache.zookeeper.server.ServerCnxnFactory) simulator | 2024-01-21 23:14:29,552 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4083 ms. grafana | logger=sqlstore t=2024-01-21T23:14:30.738444237Z level=info msg="Connecting to DB" dbtype=sqlite3 kafka | authorizer.class.name = policy-apex-pdp | connections.max.idle.ms = 540000 prometheus | ts=2024-01-21T23:14:29.727Z caller=main.go:1063 level=info msg="TSDB started" policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-pap | check.crcs = true zookeeper_1 | [2024-01-21 23:14:29,321] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) simulator | 2024-01-21 23:14:29,561 INFO org.onap.policy.models.simulators starting SDNC simulator grafana | logger=sqlstore t=2024-01-21T23:14:30.738463937Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db kafka | auto.create.topics.enable = true policy-apex-pdp | default.api.timeout.ms = 60000 prometheus | ts=2024-01-21T23:14:29.727Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml policy-db-migrator | -------------- policy-pap | client.dns.lookup = use_all_dns_ips zookeeper_1 | [2024-01-21 23:14:29,322] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) simulator | 2024-01-21 23:14:29,574 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START grafana | logger=migrator t=2024-01-21T23:14:30.739035233Z level=info msg="Starting DB migrations" kafka | auto.include.jmx.reporter = true policy-apex-pdp | enable.auto.commit = true prometheus | ts=2024-01-21T23:14:29.728Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=885.889µs db_storage=1.69µs remote_storage=2.32µs web_handler=670ns 
query_engine=1.4µs scrape=217.172µs scrape_sd=98.811µs notify=28.811µs notify_sd=21.09µs rules=2.36µs tracing=12.7µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-pap | client.id = consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-1 zookeeper_1 | [2024-01-21 23:14:29,342] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) simulator | 2024-01-21 23:14:29,577 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING grafana | logger=migrator t=2024-01-21T23:14:30.739909751Z level=info msg="Executing migration" id="create migration_log table" kafka | auto.leader.rebalance.enable = true policy-apex-pdp | exclude.internal.topics = true prometheus | ts=2024-01-21T23:14:29.728Z caller=main.go:1024 level=info msg="Server is ready to receive web requests." 
policy-db-migrator | -------------- policy-pap | client.rack = zookeeper_1 | [2024-01-21 23:14:29,342] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) simulator | 2024-01-21 23:14:29,579 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING grafana | logger=migrator t=2024-01-21T23:14:30.740624378Z level=info msg="Migration successfully executed" id="create migration_log table" duration=715.307µs kafka | background.threads = 10 policy-apex-pdp | fetch.max.bytes = 52428800 prometheus | ts=2024-01-21T23:14:29.728Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 
policy-db-migrator | policy-pap | connections.max.idle.ms = 540000 zookeeper_1 | [2024-01-21 23:14:29,346] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) simulator | 2024-01-21 23:14:29,580 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 grafana | logger=migrator t=2024-01-21T23:14:30.744569627Z level=info msg="Executing migration" id="create user table" kafka | broker.heartbeat.interval.ms = 2000 policy-apex-pdp | fetch.max.wait.ms = 500 policy-db-migrator | policy-pap | default.api.timeout.ms = 60000 zookeeper_1 | [2024-01-21 23:14:29,346] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) simulator | 2024-01-21 23:14:29,583 INFO Session workerName=node0 grafana | logger=migrator t=2024-01-21T23:14:30.745025961Z level=info msg="Migration successfully executed" id="create user table" duration=456.114µs kafka | broker.id = 1 policy-apex-pdp | fetch.min.bytes = 1 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-pap | enable.auto.commit = true zookeeper_1 | [2024-01-21 23:14:29,350] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) simulator | 2024-01-21 23:14:29,642 INFO Using GSON for REST calls grafana | logger=migrator t=2024-01-21T23:14:30.750219592Z level=info msg="Executing migration" id="add unique index user.login" kafka | broker.id.generation.enable = true policy-apex-pdp | group.id = e43a1262-c2bd-4185-8b6c-0623a45ad046 policy-db-migrator | -------------- policy-pap | exclude.internal.topics = true zookeeper_1 | [2024-01-21 23:14:29,350] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) simulator | 2024-01-21 23:14:29,651 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE} grafana | logger=migrator t=2024-01-21T23:14:30.75102187Z level=info 
msg="Migration successfully executed" id="add unique index user.login" duration=802.558µs kafka | broker.rack = null policy-apex-pdp | group.instance.id = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-pap | fetch.max.bytes = 52428800 zookeeper_1 | [2024-01-21 23:14:29,358] INFO Snapshot loaded in 12 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) simulator | 2024-01-21 23:14:29,653 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} grafana | logger=migrator t=2024-01-21T23:14:30.754734847Z level=info msg="Executing migration" id="add unique index user.email" kafka | broker.session.timeout.ms = 9000 policy-apex-pdp | heartbeat.interval.ms = 3000 policy-db-migrator | -------------- policy-pap | fetch.max.wait.ms = 500 zookeeper_1 | [2024-01-21 23:14:29,359] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) simulator | 2024-01-21 23:14:29,653 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1939ms grafana | logger=migrator t=2024-01-21T23:14:30.755566465Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=827.458µs kafka | client.quota.callback.class = null policy-apex-pdp | interceptor.classes = [] policy-db-migrator | policy-pap | fetch.min.bytes = 1 zookeeper_1 | [2024-01-21 23:14:29,359] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-01-21T23:14:30.759057099Z level=info 
msg="Executing migration" id="drop index UQE_user_login - v1" simulator | 2024-01-21 23:14:29,653 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4925 ms. simulator | 2024-01-21 23:14:29,654 INFO org.onap.policy.models.simulators starting SO simulator policy-apex-pdp | internal.leave.group.on.close = true policy-pap | group.id = 0096ba3d-86d0-4a50-8361-ec89b03a0194 grafana | logger=migrator t=2024-01-21T23:14:30.760020018Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=962.849µs grafana | logger=migrator t=2024-01-21T23:14:30.766472242Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" kafka | compression.type = producer simulator | 2024-01-21 23:14:29,658 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, 
connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-db-migrator | policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | group.instance.id = null grafana | logger=migrator t=2024-01-21T23:14:30.767136008Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=663.616µs grafana | logger=migrator t=2024-01-21T23:14:30.770857995Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" kafka | connection.failed.authentication.delay.ms = 100 simulator | 2024-01-21 23:14:29,659 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-apex-pdp | isolation.level = read_uncommitted policy-pap | heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-01-21T23:14:30.775686572Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.826457ms grafana | logger=migrator t=2024-01-21T23:14:30.785571669Z level=info 
msg="Executing migration" id="create user table v2" kafka | connections.max.idle.ms = 600000 simulator | 2024-01-21 23:14:29,660 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-db-migrator | -------------- policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-01-21T23:14:30.786323756Z level=info msg="Migration successfully executed" id="create user table v2" duration=752.347µs grafana | logger=migrator t=2024-01-21T23:14:30.792232095Z level=info msg="Executing migration" id="create index UQE_user_login - v2" kafka | connections.max.reauth.ms = 0 simulator | 2024-01-21 23:14:29,660 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-pap | internal.leave.group.on.close = true 
grafana | logger=migrator t=2024-01-21T23:14:30.793290385Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.05804ms grafana | logger=migrator t=2024-01-21T23:14:30.797824369Z level=info msg="Executing migration" id="create index UQE_user_email - v2" kafka | control.plane.listener.name = null simulator | 2024-01-21 23:14:29,668 INFO Session workerName=node0 policy-db-migrator | -------------- policy-apex-pdp | max.poll.interval.ms = 300000 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false grafana | logger=migrator t=2024-01-21T23:14:30.79892325Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.098821ms grafana | logger=migrator t=2024-01-21T23:14:30.802476925Z level=info msg="Executing migration" id="copy data_source v1 to v2" kafka | controlled.shutdown.enable = true simulator | 2024-01-21 23:14:29,755 INFO Using GSON for REST calls policy-db-migrator | policy-apex-pdp | max.poll.records = 500 policy-pap | isolation.level = read_uncommitted grafana | logger=migrator t=2024-01-21T23:14:30.803091121Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=614.066µs grafana | logger=migrator t=2024-01-21T23:14:30.809633395Z level=info msg="Executing migration" id="Drop old table user_v1" kafka | controlled.shutdown.max.retries = 3 simulator | 2024-01-21 23:14:29,770 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE} policy-db-migrator | policy-apex-pdp | metadata.max.age.ms = 300000 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-01-21T23:14:30.8101138Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=478.315µs grafana | logger=migrator t=2024-01-21T23:14:30.815218Z level=info msg="Executing migration" id="Add column help_flags1 to user table" kafka | 
controlled.shutdown.retry.backoff.ms = 5000 simulator | 2024-01-21 23:14:29,771 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-apex-pdp | metric.reporters = [] policy-pap | max.partition.fetch.bytes = 1048576 grafana | logger=migrator t=2024-01-21T23:14:30.816989727Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.770857ms grafana | logger=migrator t=2024-01-21T23:14:30.825507781Z level=info msg="Executing migration" id="Update user table charset" kafka | controller.listener.names = null simulator | 2024-01-21 23:14:29,771 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @2057ms policy-db-migrator | -------------- policy-apex-pdp | metrics.num.samples = 2 policy-pap | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-01-21T23:14:30.825536581Z level=info msg="Migration successfully executed" id="Update user table charset" duration=29.95µs grafana | logger=migrator t=2024-01-21T23:14:30.832330098Z level=info msg="Executing migration" id="Add last_seen_at column to user" kafka | controller.quorum.append.linger.ms = 25 simulator | 2024-01-21 23:14:29,771 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4888 ms. policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-apex-pdp | metrics.recording.level = INFO policy-pap | max.poll.records = 500 grafana | logger=migrator t=2024-01-21T23:14:30.833430859Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.100691ms grafana | logger=migrator t=2024-01-21T23:14:30.840091194Z level=info msg="Executing migration" id="Add missing user data" kafka | controller.quorum.election.backoff.max.ms = 1000 simulator | 2024-01-21 23:14:29,772 INFO org.onap.policy.models.simulators starting VFC simulator policy-db-migrator | -------------- policy-apex-pdp | metrics.sample.window.ms = 30000 policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-01-21T23:14:30.840295056Z level=info msg="Migration successfully executed" id="Add missing user data" duration=206.702µs grafana | logger=migrator t=2024-01-21T23:14:30.845202434Z level=info msg="Executing migration" id="Add is_disabled column to user" kafka | controller.quorum.election.timeout.ms = 1000 zookeeper_1 | [2024-01-21 23:14:29,369] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) policy-db-migrator | policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | metric.reporters = [] simulator | 2024-01-21 23:14:29,775 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START kafka | controller.quorum.fetch.timeout.ms = 2000 grafana | logger=migrator t=2024-01-21T23:14:30.846371896Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.168372ms grafana | logger=migrator t=2024-01-21T23:14:30.85090767Z level=info msg="Executing migration" id="Add index user.login/user.email" policy-db-migrator | policy-apex-pdp | receive.buffer.bytes = 65536 policy-pap | metrics.num.samples = 2 simulator | 2024-01-21 23:14:29,775 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | controller.quorum.request.timeout.ms = 2000 zookeeper_1 | [2024-01-21 23:14:29,370] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) grafana | logger=migrator t=2024-01-21T23:14:30.852166042Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.258492ms policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-pap | metrics.recording.level = INFO simulator | 2024-01-21 23:14:29,776 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | controller.quorum.retry.backoff.ms = 20 zookeeper_1 | [2024-01-21 23:14:29,409] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) grafana | logger=migrator t=2024-01-21T23:14:30.859351673Z level=info msg="Executing migration" id="Add is_service_account column to user" policy-db-migrator | -------------- policy-apex-pdp | 
reconnect.backoff.ms = 50 policy-pap | metrics.sample.window.ms = 30000 simulator | 2024-01-21 23:14:29,777 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 kafka | controller.quorum.voters = [] zookeeper_1 | [2024-01-21 23:14:29,410] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) grafana | logger=migrator t=2024-01-21T23:14:30.86112435Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.772167ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-apex-pdp | request.timeout.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] simulator | 2024-01-21 23:14:29,781 INFO Session workerName=node0 kafka | controller.quota.window.num = 11 zookeeper_1 | [2024-01-21 23:14:30,439] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) grafana | logger=migrator t=2024-01-21T23:14:30.86618284Z level=info msg="Executing migration" id="Update is_service_account column to nullable" policy-db-migrator | -------------- policy-apex-pdp | retry.backoff.ms = 100 policy-pap | receive.buffer.bytes = 65536 simulator | 2024-01-21 23:14:29,830 INFO Using GSON for REST calls kafka | controller.quota.window.size.seconds = 1 grafana | logger=migrator t=2024-01-21T23:14:30.879063546Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=12.880716ms policy-db-migrator | policy-apex-pdp | sasl.client.callback.handler.class = null policy-pap | reconnect.backoff.max.ms = 
1000 simulator | 2024-01-21 23:14:29,840 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE} kafka | controller.socket.timeout.ms = 30000 grafana | logger=migrator t=2024-01-21T23:14:30.883305418Z level=info msg="Executing migration" id="create temp user table v1-7" policy-db-migrator | policy-apex-pdp | sasl.jaas.config = null policy-pap | reconnect.backoff.ms = 50 simulator | 2024-01-21 23:14:29,842 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} grafana | logger=migrator t=2024-01-21T23:14:30.884822883Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.519125ms policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | request.timeout.ms = 30000 kafka | create.topic.policy.class.name = null simulator | 2024-01-21 23:14:29,842 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @2127ms grafana | logger=migrator t=2024-01-21T23:14:30.891172655Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" policy-db-migrator | -------------- policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | retry.backoff.ms = 100 kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 grafana | logger=migrator t=2024-01-21T23:14:30.892397817Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.225202ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-apex-pdp | sasl.kerberos.service.name = null policy-pap | sasl.client.callback.handler.class = null simulator | 2024-01-21 23:14:29,842 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4933 ms. kafka | delegation.token.expiry.time.ms = 86400000 grafana | logger=migrator t=2024-01-21T23:14:30.897616679Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" policy-db-migrator | -------------- policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.jaas.config = null simulator | 2024-01-21 23:14:29,843 INFO org.onap.policy.models.simulators started kafka | delegation.token.master.key = null grafana | logger=migrator t=2024-01-21T23:14:30.898294055Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=677.386µs policy-db-migrator | policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | delegation.token.max.lifetime.ms = 604800000 grafana | logger=migrator t=2024-01-21T23:14:30.906319804Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" policy-db-migrator | policy-apex-pdp | sasl.login.callback.handler.class = null policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | delegation.token.secret.key = null grafana | logger=migrator t=2024-01-21T23:14:30.907562966Z 
level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.243162ms policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-apex-pdp | sasl.login.class = null policy-pap | sasl.kerberos.service.name = null kafka | delete.records.purgatory.purge.interval.requests = 1 grafana | logger=migrator t=2024-01-21T23:14:30.914707936Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" policy-db-migrator | -------------- policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | delete.topic.enable = true grafana | logger=migrator t=2024-01-21T23:14:30.915687056Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=978.87µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | sasl.login.read.timeout.ms = null policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | early.start.listeners = null grafana | logger=migrator t=2024-01-21T23:14:30.920605304Z level=info msg="Executing migration" id="Update temp_user table charset" policy-db-migrator | -------------- policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.callback.handler.class = null kafka | fetch.max.bytes = 57671680 grafana | logger=migrator t=2024-01-21T23:14:30.920636384Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=32.01µs policy-db-migrator | policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 kafka | fetch.purgatory.purge.interval.requests = 1000 grafana | logger=migrator t=2024-01-21T23:14:30.923553203Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" policy-db-migrator | policy-apex-pdp | 
sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.class = null kafka | group.consumer.assignors = [] grafana | logger=migrator t=2024-01-21T23:14:30.924034178Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=480.945µs policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.connect.timeout.ms = null kafka | group.consumer.heartbeat.interval.ms = 5000 grafana | logger=migrator t=2024-01-21T23:14:30.926971307Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" policy-db-migrator | -------------- policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-01-21T23:14:30.927544412Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=572.456µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-apex-pdp | sasl.login.retry.backoff.ms = 100 kafka | group.consumer.max.session.timeout.ms = 60000 policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-01-21T23:14:30.932495371Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" policy-db-migrator | -------------- policy-apex-pdp | sasl.mechanism = GSSAPI kafka | group.consumer.max.size = 2147483647 policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-01-21T23:14:30.933086826Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=591.345µs policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 kafka | group.consumer.min.heartbeat.interval.ms = 5000 policy-pap | 
sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-01-21T23:14:30.936984185Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.expected.audience = null kafka | group.consumer.min.session.timeout.ms = 45000 policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-01-21T23:14:30.937909484Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=925.299µs policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-apex-pdp | sasl.oauthbearer.expected.issuer = null kafka | group.consumer.session.timeout.ms = 45000 policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-01-21T23:14:30.94161921Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" policy-db-migrator | -------------- policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | group.coordinator.new.enable = false policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-01-21T23:14:30.945862462Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.243512ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | group.coordinator.threads = 1 policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-01-21T23:14:30.950575208Z level=info msg="Executing migration" id="create temp_user v2" policy-db-migrator | -------------- policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | group.initial.rebalance.delay.ms = 3000 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 
grafana | logger=migrator t=2024-01-21T23:14:30.951296525Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=721.027µs policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null kafka | group.max.session.timeout.ms = 1800000 policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-01-21T23:14:30.954965111Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope kafka | group.max.size = 2147483647 policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-01-21T23:14:30.955650178Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=684.727µs policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub kafka | group.min.session.timeout.ms = 6000 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-01-21T23:14:30.960282694Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" policy-db-migrator | -------------- policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null kafka | initial.broker.registration.timeout.ms = 60000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-01-21T23:14:30.961376414Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.09309ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | security.protocol = PLAINTEXT kafka | inter.broker.listener.name = PLAINTEXT policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-01-21T23:14:30.964846138Z 
level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" policy-db-migrator | -------------- policy-apex-pdp | security.providers = null kafka | inter.broker.protocol.version = 3.5-IV2 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-01-21T23:14:30.965924829Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.078551ms policy-db-migrator | policy-apex-pdp | send.buffer.bytes = 131072 kafka | kafka.metrics.polling.interval.secs = 10 policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-01-21T23:14:30.970642905Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" policy-db-migrator | policy-apex-pdp | session.timeout.ms = 45000 kafka | kafka.metrics.reporters = [] policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-01-21T23:14:30.971358622Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=715.447µs policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 kafka | leader.imbalance.check.interval.seconds = 300 policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-01-21T23:14:30.974646294Z level=info msg="Executing migration" id="copy temp_user v1 to v2" policy-db-migrator | -------------- policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 kafka | leader.imbalance.per.broker.percentage = 10 policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-01-21T23:14:30.975053358Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=406.984µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 
policy-apex-pdp | ssl.cipher.suites = null kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT policy-pap | security.providers = null grafana | logger=migrator t=2024-01-21T23:14:30.978623343Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" policy-db-migrator | -------------- policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-01-21T23:14:30.979517922Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=900.949µs policy-db-migrator | policy-apex-pdp | ssl.endpoint.identification.algorithm = https kafka | log.cleaner.backoff.ms = 15000 policy-pap | session.timeout.ms = 45000 grafana | logger=migrator t=2024-01-21T23:14:30.984035206Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" policy-db-migrator | policy-apex-pdp | ssl.engine.factory.class = null kafka | log.cleaner.dedupe.buffer.size = 134217728 policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-01-21T23:14:30.984469701Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=434.274µs policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-apex-pdp | ssl.key.password = null kafka | log.cleaner.delete.retention.ms = 86400000 policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-01-21T23:14:30.987702052Z level=info msg="Executing migration" id="create star table" policy-db-migrator | -------------- policy-apex-pdp | ssl.keymanager.algorithm = SunX509 kafka | log.cleaner.enable = true policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-01-21T23:14:30.988401319Z level=info msg="Migration successfully executed" id="create star 
table" duration=694.207µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-apex-pdp | ssl.keystore.certificate.chain = null kafka | log.cleaner.io.buffer.load.factor = 0.9 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-01-21T23:14:30.991643041Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" policy-db-migrator | -------------- policy-apex-pdp | ssl.keystore.key = null kafka | log.cleaner.io.buffer.size = 524288 policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-01-21T23:14:30.992870843Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.226462ms policy-db-migrator | policy-apex-pdp | ssl.keystore.location = null kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-01-21T23:14:30.998382767Z level=info msg="Executing migration" id="create org table v1" policy-db-migrator | policy-apex-pdp | ssl.keystore.password = null kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-01-21T23:14:30.999382257Z level=info msg="Migration successfully executed" id="create org table v1" duration=999.23µs policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-apex-pdp | ssl.keystore.type = JKS kafka | log.cleaner.min.cleanable.ratio = 0.5 policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-01-21T23:14:31.004630748Z level=info msg="Executing migration" id="create index UQE_org_name - v1" policy-db-migrator | -------------- policy-apex-pdp | ssl.protocol = TLSv1.3 kafka | log.cleaner.min.compaction.lag.ms = 0 policy-pap | ssl.keystore.certificate.chain = null grafana | 
logger=migrator t=2024-01-21T23:14:31.005363606Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=732.158µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | ssl.provider = null kafka | log.cleaner.threads = 1 policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-01-21T23:14:31.008660078Z level=info msg="Executing migration" id="create org_user table v1" policy-apex-pdp | ssl.secure.random.implementation = null kafka | log.cleanup.policy = [delete] policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.009681998Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.02134ms policy-apex-pdp | ssl.trustmanager.algorithm = PKIX kafka | log.dir = /tmp/kafka-logs policy-pap | ssl.keystore.password = null policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.012876709Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" policy-apex-pdp | ssl.truststore.certificates = null kafka | log.dirs = /var/lib/kafka/data policy-pap | ssl.keystore.type = JKS policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.0140897Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.212722ms policy-apex-pdp | ssl.truststore.location = null kafka | log.flush.interval.messages = 9223372036854775807 policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-apex-pdp | ssl.truststore.password = null policy-pap | ssl.provider = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.026303009Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" kafka | 
log.flush.interval.ms = null policy-apex-pdp | ssl.truststore.type = JKS policy-pap | ssl.secure.random.implementation = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-21T23:14:31.028240257Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.942488ms kafka | log.flush.offset.checkpoint.interval.ms = 60000 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.031852662Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" kafka | log.flush.scheduler.interval.ms = 9223372036854775807 policy-apex-pdp | policy-pap | ssl.truststore.certificates = null policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.03264063Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=787.638µs kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 policy-apex-pdp | [2024-01-21T23:15:05.127+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.035698849Z level=info msg="Executing migration" id="Update org table charset" kafka | log.index.interval.bytes = 4096 policy-pap | ssl.truststore.location = null policy-apex-pdp | [2024-01-21T23:15:05.127+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a grafana | logger=migrator t=2024-01-21T23:14:31.03572684Z level=info msg="Migration successfully executed" id="Update org table charset" duration=28.371µs kafka | log.index.size.max.bytes = 10485760 policy-pap | ssl.truststore.password = null policy-db-migrator | > upgrade 
0240-jpatoscanodetemplate_metadata.sql grafana | logger=migrator t=2024-01-21T23:14:31.038722479Z level=info msg="Executing migration" id="Update org_user table charset" kafka | log.message.downconversion.enable = true policy-pap | ssl.truststore.type = JKS policy-db-migrator | -------------- policy-apex-pdp | [2024-01-21T23:15:05.127+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878905125 grafana | logger=migrator t=2024-01-21T23:14:31.038747769Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=25.37µs kafka | log.message.format.version = 3.0-IV1 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | [2024-01-21T23:15:05.130+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-1, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-01-21T23:14:31.043313493Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 policy-pap | policy-db-migrator | -------------- policy-apex-pdp | [2024-01-21T23:15:05.144+00:00|INFO|ServiceManager|main] service manager starting grafana | logger=migrator t=2024-01-21T23:14:31.043489085Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=188.872µs kafka | log.message.timestamp.type = CreateTime policy-pap | [2024-01-21T23:15:01.886+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-db-migrator | policy-apex-pdp | [2024-01-21T23:15:05.144+00:00|INFO|ServiceManager|main] service manager starting topics grafana | logger=migrator t=2024-01-21T23:14:31.047429013Z level=info 
msg="Executing migration" id="create dashboard table" kafka | log.preallocate = false policy-pap | [2024-01-21T23:15:01.887+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | policy-apex-pdp | [2024-01-21T23:15:05.150+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e43a1262-c2bd-4185-8b6c-0623a45ad046, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting grafana | logger=migrator t=2024-01-21T23:14:31.048468683Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.03906ms kafka | log.retention.bytes = -1 policy-pap | [2024-01-21T23:15:01.887+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878901884 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-apex-pdp | [2024-01-21T23:15:05.176+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: grafana | logger=migrator t=2024-01-21T23:14:31.052113769Z level=info msg="Executing migration" id="add index dashboard.account_id" kafka | log.retention.check.interval.ms = 300000 policy-pap | [2024-01-21T23:15:01.890+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-1, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | -------------- policy-apex-pdp | allow.auto.create.topics = true grafana | logger=migrator t=2024-01-21T23:14:31.052881056Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" 
duration=766.717µs kafka | log.retention.hours = 168 policy-pap | [2024-01-21T23:15:01.891+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | auto.commit.interval.ms = 5000 grafana | logger=migrator t=2024-01-21T23:14:31.055890095Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" kafka | log.retention.minutes = null policy-pap | allow.auto.create.topics = true policy-db-migrator | -------------- policy-apex-pdp | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-01-21T23:14:31.056732833Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=839.488µs kafka | log.retention.ms = null policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | policy-apex-pdp | auto.offset.reset = latest grafana | logger=migrator t=2024-01-21T23:14:31.062189086Z level=info msg="Executing migration" id="create dashboard_tag table" kafka | log.roll.hours = 168 policy-pap | auto.include.jmx.reporter = true policy-db-migrator | policy-apex-pdp | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-01-21T23:14:31.063325047Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.135911ms kafka | log.roll.jitter.hours = 0 policy-pap | auto.offset.reset = latest policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-apex-pdp | check.crcs = true grafana | logger=migrator t=2024-01-21T23:14:31.067344485Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" kafka | log.roll.jitter.ms = null policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | -------------- policy-apex-pdp | client.dns.lookup = use_all_dns_ips grafana | logger=migrator 
t=2024-01-21T23:14:31.068596637Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.251352ms kafka | log.roll.ms = null policy-pap | check.crcs = true policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | client.id = consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2 grafana | logger=migrator t=2024-01-21T23:14:31.072267323Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" kafka | log.segment.bytes = 1073741824 policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- policy-apex-pdp | client.rack = grafana | logger=migrator t=2024-01-21T23:14:31.074426394Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=2.158261ms kafka | log.segment.delete.delay.ms = 60000 policy-pap | client.id = consumer-policy-pap-2 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.079464232Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" kafka | max.connection.creation.rate = 2147483647 policy-apex-pdp | connections.max.idle.ms = 540000 policy-pap | client.rack = policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.086750393Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.284631ms kafka | max.connections = 2147483647 policy-apex-pdp | default.api.timeout.ms = 60000 policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql grafana | logger=migrator t=2024-01-21T23:14:31.090408109Z level=info msg="Executing migration" id="create dashboard v2" kafka | max.connections.per.ip = 2147483647 policy-apex-pdp | enable.auto.commit = true policy-pap | default.api.timeout.ms = 
60000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.091339718Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=931.439µs policy-apex-pdp | exclude.internal.topics = true policy-pap | enable.auto.commit = true policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | max.connections.per.ip.overrides = grafana | logger=migrator t=2024-01-21T23:14:31.094480558Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" policy-apex-pdp | fetch.max.bytes = 52428800 policy-pap | exclude.internal.topics = true policy-db-migrator | -------------- kafka | max.incremental.fetch.session.cache.slots = 1000 grafana | logger=migrator t=2024-01-21T23:14:31.095253105Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=772.527µs policy-apex-pdp | fetch.max.wait.ms = 500 policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | kafka | message.max.bytes = 1048588 grafana | logger=migrator t=2024-01-21T23:14:31.100512457Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" policy-apex-pdp | fetch.min.bytes = 1 policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | kafka | metadata.log.dir = null grafana | logger=migrator t=2024-01-21T23:14:31.101705208Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.192261ms policy-apex-pdp | group.id = e43a1262-c2bd-4185-8b6c-0623a45ad046 policy-pap | fetch.min.bytes = 1 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 grafana | logger=migrator t=2024-01-21T23:14:31.10497721Z level=info msg="Executing migration" id="copy dashboard v1 to v2" policy-apex-pdp | group.instance.id = null 
policy-pap | group.id = policy-pap policy-db-migrator | -------------- kafka | metadata.log.max.snapshot.interval.ms = 3600000 grafana | logger=migrator t=2024-01-21T23:14:31.105290663Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=313.293µs policy-apex-pdp | heartbeat.interval.ms = 3000 policy-pap | group.instance.id = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | metadata.log.segment.bytes = 1073741824 grafana | logger=migrator t=2024-01-21T23:14:31.108786286Z level=info msg="Executing migration" id="drop table dashboard_v1" policy-apex-pdp | interceptor.classes = [] policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | -------------- kafka | metadata.log.segment.min.bytes = 8388608 grafana | logger=migrator t=2024-01-21T23:14:31.109631305Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=845.259µs policy-apex-pdp | internal.leave.group.on.close = true policy-pap | interceptor.classes = [] policy-db-migrator | kafka | metadata.log.segment.ms = 604800000 grafana | logger=migrator t=2024-01-21T23:14:31.11427579Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | internal.leave.group.on.close = true policy-db-migrator | kafka | metadata.max.idle.interval.ms = 500 grafana | logger=migrator t=2024-01-21T23:14:31.11434043Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=65.14µs grafana | logger=migrator t=2024-01-21T23:14:31.116824335Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" policy-apex-pdp | isolation.level = read_uncommitted kafka | metadata.max.retention.bytes = 104857600 grafana | logger=migrator 
t=2024-01-21T23:14:31.118574941Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.750386ms policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | -------------- policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | metadata.max.retention.ms = 604800000 grafana | logger=migrator t=2024-01-21T23:14:31.12259607Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | max.partition.fetch.bytes = 1048576 kafka | metric.reporters = [] grafana | logger=migrator t=2024-01-21T23:14:31.124937123Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.340353ms policy-db-migrator | -------------- policy-pap | isolation.level = read_uncommitted policy-apex-pdp | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-01-21T23:14:31.128881891Z level=info msg="Executing migration" id="Add column gnetId in dashboard" kafka | metrics.num.samples = 2 policy-db-migrator | policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.poll.records = 500 grafana | logger=migrator t=2024-01-21T23:14:31.130671189Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.789008ms kafka | metrics.recording.level = INFO policy-db-migrator | policy-pap | max.partition.fetch.bytes = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-01-21T23:14:31.135231363Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" kafka | metrics.sample.window.ms = 30000 
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-pap | max.poll.interval.ms = 300000 policy-apex-pdp | metric.reporters = [] grafana | logger=migrator t=2024-01-21T23:14:31.13601406Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=781.837µs kafka | min.insync.replicas = 1 policy-pap | max.poll.records = 500 policy-apex-pdp | metrics.num.samples = 2 grafana | logger=migrator t=2024-01-21T23:14:31.139559195Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" kafka | node.id = 1 policy-db-migrator | -------------- policy-pap | metadata.max.age.ms = 300000 policy-apex-pdp | metrics.recording.level = INFO grafana | logger=migrator t=2024-01-21T23:14:31.142401652Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.844557ms kafka | num.io.threads = 8 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-pap | metric.reporters = [] policy-apex-pdp | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-01-21T23:14:31.145589493Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" kafka | num.network.threads = 3 policy-db-migrator | -------------- policy-pap | metrics.num.samples = 2 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | num.partitions = 1 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.146377251Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=787.218µs policy-pap | metrics.recording.level = INFO policy-apex-pdp | receive.buffer.bytes = 65536 kafka | num.recovery.threads.per.data.dir = 1 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.150881414Z level=info msg="Executing migration" id="Add 
index for dashboard_id in dashboard_tag" policy-pap | metrics.sample.window.ms = 30000 policy-apex-pdp | reconnect.backoff.max.ms = 1000 kafka | num.replica.alter.log.dirs.threads = null policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql grafana | logger=migrator t=2024-01-21T23:14:31.151696372Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=810.628µs policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | reconnect.backoff.ms = 50 kafka | num.replica.fetchers = 1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.155355788Z level=info msg="Executing migration" id="Update dashboard table charset" policy-pap | receive.buffer.bytes = 65536 policy-apex-pdp | request.timeout.ms = 30000 kafka | offset.metadata.max.bytes = 4096 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-21T23:14:31.155398248Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=44.16µs policy-pap | reconnect.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 kafka | offsets.commit.required.acks = -1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.159374327Z level=info msg="Executing migration" id="Update dashboard_tag table charset" policy-pap | reconnect.backoff.ms = 50 policy-apex-pdp | sasl.client.callback.handler.class = null kafka | offsets.commit.timeout.ms = 5000 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.159416247Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=43.61µs policy-pap | request.timeout.ms = 30000 
policy-apex-pdp | sasl.jaas.config = null kafka | offsets.load.buffer.size = 5242880 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.164308364Z level=info msg="Executing migration" id="Add column folder_id in dashboard" policy-pap | retry.backoff.ms = 100 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | offsets.retention.check.interval.ms = 600000 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql grafana | logger=migrator t=2024-01-21T23:14:31.167900829Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.950389ms policy-pap | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 kafka | offsets.retention.minutes = 10080 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.173024669Z level=info msg="Executing migration" id="Add column isFolder in dashboard" policy-pap | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.service.name = null kafka | offsets.topic.compression.codec = 0 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-21T23:14:31.174856517Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.831918ms policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | offsets.topic.num.partitions = 50 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.178142019Z level=info msg="Executing migration" id="Add column has_acl in dashboard" policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | offsets.topic.replication.factor = 1 policy-db-migrator | grafana | logger=migrator 
t=2024-01-21T23:14:31.180067687Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.898758ms policy-pap | sasl.kerberos.service.name = null policy-apex-pdp | sasl.login.callback.handler.class = null kafka | offsets.topic.segment.bytes = 104857600 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.185134156Z level=info msg="Executing migration" id="Add column uid in dashboard" policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.login.class = null kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql grafana | logger=migrator t=2024-01-21T23:14:31.187141565Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.006479ms policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.connect.timeout.ms = null kafka | password.encoder.iterations = 4096 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.191307586Z level=info msg="Executing migration" id="Update uid column values in dashboard" policy-pap | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.read.timeout.ms = null kafka | password.encoder.key.length = 128 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) grafana | logger=migrator t=2024-01-21T23:14:31.191492738Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=182.692µs policy-pap | sasl.login.class = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 kafka | password.encoder.keyfactory.algorithm = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.195297385Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" policy-pap | sasl.login.connect.timeout.ms = null policy-apex-pdp | 
sasl.login.refresh.min.period.seconds = 60 kafka | password.encoder.old.secret = null policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.196156343Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=856.728µs policy-pap | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 kafka | password.encoder.secret = null policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.201586165Z level=info msg="Executing migration" id="Remove unique index org_id_slug" policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql grafana | logger=migrator t=2024-01-21T23:14:31.202678066Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.091391ms policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 kafka | process.roles = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.205952388Z level=info msg="Executing migration" id="Update dashboard title length" policy-pap | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 kafka | producer.id.expiration.check.interval.ms = 600000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-21T23:14:31.205979398Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=27.16µs policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.mechanism = GSSAPI kafka | producer.id.expiration.ms = 86400000 policy-db-migrator | -------------- 
grafana | logger=migrator t=2024-01-21T23:14:31.208433342Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 kafka | producer.purgatory.purge.interval.requests = 1000 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.209193949Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=759.777µs policy-pap | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.expected.audience = null kafka | queued.max.request.bytes = -1 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.214512671Z level=info msg="Executing migration" id="create dashboard_provisioning" policy-pap | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql grafana | logger=migrator t=2024-01-21T23:14:31.215187217Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=674.106µs policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | queued.max.requests = 500 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.218239857Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" policy-pap | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | quota.window.num = 11 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-21T23:14:31.228459726Z 
level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=10.217719ms policy-pap | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | quota.window.size.seconds = 1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.231919079Z level=info msg="Executing migration" id="create dashboard_provisioning v2" kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.232412534Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=495.055µs policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null kafka | remote.log.manager.task.interval.ms = 30000 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.235706746Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql grafana | logger=migrator t=2024-01-21T23:14:31.236415203Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=707.857µs policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub kafka | remote.log.manager.task.retry.backoff.ms = 500 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.240849006Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | 
sasl.oauthbearer.token.endpoint.url = null kafka | remote.log.manager.task.retry.jitter = 0.2 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-21T23:14:31.241634404Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=784.638µs policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | security.protocol = PLAINTEXT kafka | remote.log.manager.thread.pool.size = 10 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.244949346Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | security.providers = null kafka | remote.log.metadata.manager.class.name = null policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.2454007Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=451.184µs policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | send.buffer.bytes = 131072 kafka | remote.log.metadata.manager.class.path = null policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.249845313Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" policy-pap | security.protocol = PLAINTEXT policy-apex-pdp | session.timeout.ms = 45000 kafka | remote.log.metadata.manager.impl.prefix = null policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql grafana | logger=migrator t=2024-01-21T23:14:31.250679371Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=831.078µs policy-pap | security.providers = null policy-apex-pdp | 
socket.connection.setup.timeout.max.ms = 30000 kafka | remote.log.metadata.manager.listener.name = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.255643479Z level=info msg="Executing migration" id="Add check_sum column" policy-pap | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 kafka | remote.log.reader.max.pending.tasks = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-21T23:14:31.257628968Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.987829ms policy-pap | session.timeout.ms = 45000 policy-apex-pdp | ssl.cipher.suites = null kafka | remote.log.reader.threads = 10 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.260738758Z level=info msg="Executing migration" id="Add index for dashboard_title" policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | remote.log.storage.manager.class.name = null policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.261513326Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=773.488µs policy-pap | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.endpoint.identification.algorithm = https kafka | remote.log.storage.manager.class.path = null policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.264536465Z level=info msg="Executing migration" id="delete tags for deleted dashboards" policy-pap | ssl.cipher.suites = null policy-apex-pdp | ssl.engine.factory.class = null kafka | remote.log.storage.manager.impl.prefix = null policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql grafana | logger=migrator 
t=2024-01-21T23:14:31.264711117Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=174.242µs policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.key.password = null kafka | remote.log.storage.system.enable = false policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.269281421Z level=info msg="Executing migration" id="delete stars for deleted dashboards" policy-pap | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.keymanager.algorithm = SunX509 kafka | replica.fetch.backoff.ms = 1000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-21T23:14:31.269552614Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=270.503µs policy-pap | ssl.engine.factory.class = null policy-apex-pdp | ssl.keystore.certificate.chain = null kafka | replica.fetch.max.bytes = 1048576 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.272863996Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" policy-pap | ssl.key.password = null policy-apex-pdp | ssl.keystore.key = null kafka | replica.fetch.min.bytes = 1 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.274062088Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.197492ms policy-pap | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.location = null kafka | replica.fetch.response.max.bytes = 10485760 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.277649832Z level=info msg="Executing migration" id="Add isPublic for dashboard" policy-pap | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.password = null 
kafka | replica.fetch.wait.max.ms = 500
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
grafana | logger=migrator t=2024-01-21T23:14:31.279915174Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.265242ms
policy-pap | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.type = JKS
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.284489499Z level=info msg="Executing migration" id="create data_source table"
policy-pap | ssl.keystore.location = null
policy-apex-pdp | ssl.protocol = TLSv1.3
kafka | replica.lag.time.max.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-01-21T23:14:31.285337307Z level=info msg="Migration successfully executed" id="create data_source table" duration=847.098µs
policy-pap | ssl.keystore.password = null
policy-apex-pdp | ssl.provider = null
kafka | replica.selector.class = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.289180274Z level=info msg="Executing migration" id="add index data_source.account_id"
policy-pap | ssl.keystore.type = JKS
policy-apex-pdp | ssl.secure.random.implementation = null
kafka | replica.socket.receive.buffer.bytes = 65536
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.289954742Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=770.467µs
policy-pap | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
kafka | replica.socket.timeout.ms = 30000
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.293295354Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
policy-pap | ssl.provider = null
policy-apex-pdp | ssl.truststore.certificates = null
kafka | replication.quota.window.num = 11
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
grafana | logger=migrator t=2024-01-21T23:14:31.294093772Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=797.777µs
policy-pap | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.truststore.location = null
kafka | replication.quota.window.size.seconds = 1
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.299526024Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.password = null
kafka | request.timeout.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
grafana | logger=migrator t=2024-01-21T23:14:31.300273272Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=746.758µs
policy-pap | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.type = JKS
kafka | reserved.broker.max.id = 1000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.303462622Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
policy-pap | ssl.truststore.location = null
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | sasl.client.callback.handler.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.304329221Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=864.819µs
policy-pap | ssl.truststore.password = null
policy-apex-pdp |
kafka | sasl.enabled.mechanisms = [GSSAPI]
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.307521632Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
policy-pap | ssl.truststore.type = JKS
policy-apex-pdp | [2024-01-21T23:15:05.187+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
kafka | sasl.jaas.config = null
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
grafana | logger=migrator t=2024-01-21T23:14:31.317112475Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.590643ms
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | [2024-01-21T23:15:05.187+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.321967622Z level=info msg="Executing migration" id="create data_source table v2"
policy-pap |
policy-apex-pdp | [2024-01-21T23:15:05.187+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878905187
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-01-21T23:14:31.322743039Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=774.647µs
policy-pap | [2024-01-21T23:15:01.898+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
kafka | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | [2024-01-21T23:15:05.188+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.32595031Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
policy-pap | [2024-01-21T23:15:01.898+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
policy-apex-pdp | [2024-01-21T23:15:05.189+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0644ab6b-245c-4d68-8e2b-62e7f136f852, alive=false, publisher=null]]: starting
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.326742208Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=791.268µs
policy-pap | [2024-01-21T23:15:01.898+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878901898
kafka | sasl.kerberos.service.name = null
policy-apex-pdp | [2024-01-21T23:15:05.202+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.33002599Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
policy-pap | [2024-01-21T23:15:01.898+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | acks = -1
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
grafana | logger=migrator t=2024-01-21T23:14:31.330828648Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=802.328µs
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | auto.include.jmx.reporter = true
policy-pap | [2024-01-21T23:15:02.249+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.335795246Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
policy-apex-pdp | batch.size = 16384
policy-pap | [2024-01-21T23:15:02.472+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-01-21T23:14:31.336563243Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=763.337µs
kafka | sasl.login.callback.handler.class = null
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-pap | [2024-01-21T23:15:02.767+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6fafbdac, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@c7c07ff, org.springframework.security.web.context.SecurityContextHolderFilter@5dc120ab, org.springframework.security.web.header.HeaderWriterFilter@750c23a3, org.springframework.security.web.authentication.logout.LogoutFilter@581d5b33, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3909308c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7ef7f414, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4c3d72fd, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@d271d6c, org.springframework.security.web.access.ExceptionTranslationFilter@5bf1b528, org.springframework.security.web.access.intercept.AuthorizationFilter@90394d]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.340581412Z level=info msg="Executing migration" id="Add column with_credentials"
kafka | sasl.login.class = null
policy-apex-pdp | buffer.memory = 33554432
policy-pap | [2024-01-21T23:15:03.668+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.34450973Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.928408ms
kafka | sasl.login.connect.timeout.ms = null
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-pap | [2024-01-21T23:15:03.737+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.347889493Z level=info msg="Executing migration" id="Add secure json data column"
kafka | sasl.login.read.timeout.ms = null
policy-apex-pdp | client.id = producer-1
policy-pap | [2024-01-21T23:15:03.779+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
grafana | logger=migrator t=2024-01-21T23:14:31.350161155Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.271212ms
kafka | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | compression.type = none
policy-pap | [2024-01-21T23:15:03.800+00:00|INFO|ServiceManager|main] Policy PAP starting
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.355054302Z level=info msg="Executing migration" id="Update data_source table charset"
kafka | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | connections.max.idle.ms = 540000
policy-pap | [2024-01-21T23:15:03.800+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-01-21T23:14:31.355146133Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=93.141µs
kafka | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | delivery.timeout.ms = 120000
policy-pap | [2024-01-21T23:15:03.801+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.358487675Z level=info msg="Executing migration" id="Update initial version to 1"
kafka | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | enable.idempotence = true
policy-pap | [2024-01-21T23:15:03.802+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
grafana | logger=migrator t=2024-01-21T23:14:31.358784488Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=297.033µs
policy-apex-pdp | interceptor.classes = []
policy-pap | [2024-01-21T23:15:03.802+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
policy-db-migrator |
kafka | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-21T23:14:31.362690256Z level=info msg="Executing migration" id="Add read_only data column"
policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | [2024-01-21T23:15:03.802+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
policy-db-migrator |
kafka | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | linger.ms = 0
policy-pap | [2024-01-21T23:15:03.802+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
kafka | sasl.mechanism.controller.protocol = GSSAPI
grafana | logger=migrator t=2024-01-21T23:14:31.365642595Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.953149ms
policy-apex-pdp | max.block.ms = 60000
policy-db-migrator | --------------
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
grafana | logger=migrator t=2024-01-21T23:14:31.37034191Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
policy-pap | [2024-01-21T23:15:03.809+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0096ba3d-86d0-4a50-8361-ec89b03a0194, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5c65fa69
policy-apex-pdp | max.in.flight.requests.per.connection = 5
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-01-21T23:14:31.370516102Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=173.452µs
policy-pap | [2024-01-21T23:15:03.822+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0096ba3d-86d0-4a50-8361-ec89b03a0194, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-apex-pdp | max.request.size = 1048576
policy-db-migrator | --------------
kafka | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-01-21T23:14:31.373725893Z level=info msg="Executing migration" id="Update json_data with nulls"
policy-pap | [2024-01-21T23:15:03.823+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | metadata.max.age.ms = 300000
policy-db-migrator |
kafka | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-01-21T23:14:31.373882415Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=156.792µs
policy-pap | allow.auto.create.topics = true
policy-apex-pdp | metadata.max.idle.ms = 300000
policy-db-migrator |
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-01-21T23:14:31.376262128Z level=info msg="Executing migration" id="Add uid column"
policy-pap | auto.commit.interval.ms = 5000
policy-apex-pdp | metric.reporters = []
policy-db-migrator | > upgrade 0450-pdpgroup.sql
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-21T23:14:31.37852071Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.257882ms
policy-pap | auto.include.jmx.reporter = true
policy-apex-pdp | metrics.num.samples = 2
policy-db-migrator | --------------
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-21T23:14:31.381993893Z level=info msg="Executing migration" id="Update uid value"
policy-pap | auto.offset.reset = latest
policy-apex-pdp | metrics.recording.level = INFO
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
kafka | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-01-21T23:14:31.382176315Z level=info msg="Migration successfully executed" id="Update uid value" duration=183.782µs
policy-pap | bootstrap.servers = [kafka:9092]
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-db-migrator | --------------
kafka | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-01-21T23:14:31.38685852Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
policy-pap | check.crcs = true
policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
policy-db-migrator |
kafka | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-01-21T23:14:31.389473645Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=2.612445ms
policy-pap | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | partitioner.availability.timeout.ms = 0
policy-db-migrator |
kafka | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-01-21T23:14:31.394725917Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
policy-pap | client.id = consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3
policy-apex-pdp | partitioner.class = null
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
kafka | sasl.server.callback.handler.class = null
grafana | logger=migrator t=2024-01-21T23:14:31.396030379Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.303653ms
policy-pap | client.rack =
policy-apex-pdp | partitioner.ignore.keys = false
policy-db-migrator | --------------
kafka | sasl.server.max.receive.size = 524288
grafana | logger=migrator t=2024-01-21T23:14:31.399545683Z level=info msg="Executing migration" id="create api_key table"
policy-pap | connections.max.idle.ms = 540000
policy-apex-pdp | receive.buffer.bytes = 32768
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | security.inter.broker.protocol = PLAINTEXT
grafana | logger=migrator t=2024-01-21T23:14:31.40030916Z level=info msg="Migration successfully executed" id="create api_key table" duration=765.157µs
policy-pap | default.api.timeout.ms = 60000
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-db-migrator | --------------
kafka | security.providers = null
grafana | logger=migrator t=2024-01-21T23:14:31.405995366Z level=info msg="Executing migration" id="add index api_key.account_id"
policy-pap | enable.auto.commit = true
policy-apex-pdp | reconnect.backoff.ms = 50
policy-db-migrator |
kafka | server.max.startup.time.ms = 9223372036854775807
grafana | logger=migrator t=2024-01-21T23:14:31.407206728Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.210751ms
policy-pap | exclude.internal.topics = true
policy-apex-pdp | request.timeout.ms = 30000
policy-db-migrator |
kafka | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-01-21T23:14:31.410784912Z level=info msg="Executing migration" id="add index api_key.key"
policy-pap | fetch.max.bytes = 52428800
policy-apex-pdp | retries = 2147483647
policy-db-migrator | > upgrade 0470-pdp.sql
kafka | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-01-21T23:14:31.412150685Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.366173ms
policy-pap | fetch.max.wait.ms = 500
policy-apex-pdp | retry.backoff.ms = 100
policy-db-migrator | --------------
kafka | socket.listen.backlog.size = 50
grafana | logger=migrator t=2024-01-21T23:14:31.415892731Z level=info msg="Executing migration" id="add index api_key.account_id_name"
policy-pap | fetch.min.bytes = 1
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | socket.receive.buffer.bytes = 102400
policy-pap | group.id = 0096ba3d-86d0-4a50-8361-ec89b03a0194
policy-db-migrator | --------------
kafka | socket.request.max.bytes = 104857600
grafana | logger=migrator t=2024-01-21T23:14:31.41672954Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=836.269µs
policy-apex-pdp | sasl.jaas.config = null
policy-pap | group.instance.id = null
policy-db-migrator |
kafka | socket.send.buffer.bytes = 102400
grafana | logger=migrator t=2024-01-21T23:14:31.423570756Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | heartbeat.interval.ms = 3000
kafka | ssl.cipher.suites = []
grafana | logger=migrator t=2024-01-21T23:14:31.424369304Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=798.528µs
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator |
kafka | ssl.client.auth = none
grafana | logger=migrator t=2024-01-21T23:14:31.427759646Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
policy-apex-pdp | sasl.kerberos.service.name = null
policy-pap | interceptor.classes = []
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-01-21T23:14:31.428843537Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.082191ms
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | internal.leave.group.on.close = true
policy-db-migrator | --------------
kafka | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-01-21T23:14:31.434502502Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
policy-apex-pdp | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-01-21T23:14:31.435705604Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.202182ms
policy-pap | isolation.level = read_uncommitted
kafka | ssl.engine.factory.class = null
policy-apex-pdp | sasl.login.class = null
grafana | logger=migrator t=2024-01-21T23:14:31.439680922Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
kafka | ssl.key.password = null
policy-db-migrator | --------------
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-01-21T23:14:31.448551088Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.868756ms
kafka | ssl.keymanager.algorithm = SunX509
policy-db-migrator |
policy-pap | max.partition.fetch.bytes = 1048576
policy-apex-pdp | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-01-21T23:14:31.452473876Z level=info msg="Executing migration" id="create api_key table v2"
kafka | ssl.keystore.certificate.chain = null
policy-db-migrator |
policy-pap | max.poll.interval.ms = 300000
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-01-21T23:14:31.453140062Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=665.956µs
kafka | ssl.keystore.key = null
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-pap | max.poll.records = 500
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-01-21T23:14:31.457493674Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
kafka | ssl.keystore.location = null
policy-db-migrator | --------------
policy-pap | metadata.max.age.ms = 300000
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-01-21T23:14:31.458258752Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=764.258µs
policy-pap | metric.reporters = []
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | ssl.keystore.password = null
grafana | logger=migrator t=2024-01-21T23:14:31.461644955Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
kafka | ssl.keystore.type = JKS
policy-pap | metrics.num.samples = 2
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-db-migrator |
kafka | ssl.principal.mapping.rules = DEFAULT
policy-pap | metrics.recording.level = INFO
policy-db-migrator |
kafka | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-01-21T23:14:31.462818506Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.173131ms
policy-apex-pdp | sasl.mechanism = GSSAPI
kafka | ssl.provider = null
grafana | logger=migrator t=2024-01-21T23:14:31.466392301Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
kafka | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-01-21T23:14:31.467696413Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.302212ms
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
kafka | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-01-21T23:14:31.472036855Z level=info msg="Executing migration" id="copy api_key v1 to v2"
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
kafka | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-01-21T23:14:31.472400449Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=363.434µs
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | ssl.truststore.location = null
grafana | logger=migrator t=2024-01-21T23:14:31.475896913Z level=info msg="Executing migration" id="Drop old table api_key_v1"
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator |
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | ssl.truststore.password = null
grafana | logger=migrator t=2024-01-21T23:14:31.476430958Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=535.575µs
policy-db-migrator |
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | ssl.truststore.type = JKS
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-21T23:14:31.480679079Z level=info msg="Executing migration" id="Update api_key table charset"
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-21T23:14:31.48071709Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=38.751µs
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
kafka | transaction.max.timeout.ms = 900000
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-21T23:14:31.485626237Z level=info msg="Executing migration" id="Add expires to api_key table"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-01-21T23:14:31.489609466Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.982329ms
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
kafka | transaction.state.log.load.buffer.size = 5242880
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-01-21T23:14:31.493449043Z level=info msg="Executing migration" id="Add service account foreign key"
policy-db-migrator |
kafka | transaction.state.log.min.isr = 2
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-01-21T23:14:31.496937467Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.488234ms
policy-db-migrator |
kafka | transaction.state.log.num.partitions = 50
policy-pap | sasl.kerberos.service.name = null
policy-apex-pdp | security.providers = null
grafana | logger=migrator t=2024-01-21T23:14:31.50038051Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
kafka | transaction.state.log.replication.factor = 3
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-01-21T23:14:31.500546162Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=165.652µs
policy-db-migrator | --------------
kafka | transaction.state.log.segment.bytes = 104857600
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-01-21T23:14:31.503766103Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
kafka | transactional.id.expiration.ms = 604800000
policy-pap | sasl.login.callback.handler.class = null
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-01-21T23:14:31.506148566Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.382463ms
policy-db-migrator | --------------
kafka | unclean.leader.election.enable = false
policy-pap | sasl.login.class = null
policy-apex-pdp | ssl.cipher.suites = null
grafana | logger=migrator t=2024-01-21T23:14:31.510844411Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
policy-db-migrator |
kafka | unstable.api.versions.enable = false
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-01-21T23:14:31.513434657Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.584125ms
policy-db-migrator |
kafka | zookeeper.clientCnxnSocket = null
grafana | logger=migrator t=2024-01-21T23:14:31.517580857Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
kafka | zookeeper.connect = zookeeper:2181
grafana | logger=migrator t=2024-01-21T23:14:31.518272363Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=690.596µs
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | --------------
policy-apex-pdp | ssl.engine.factory.class = null
kafka | zookeeper.connection.timeout.ms = null
grafana | logger=migrator t=2024-01-21T23:14:31.52205277Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-apex-pdp | ssl.key.password = null
kafka | zookeeper.max.in.flight.requests = 10
grafana | logger=migrator t=2024-01-21T23:14:31.522581605Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=528.525µs
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator | --------------
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
kafka | zookeeper.metadata.migration.enable = false
grafana | logger=migrator t=2024-01-21T23:14:31.527363411Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator |
policy-apex-pdp | ssl.keystore.certificate.chain = null
kafka | zookeeper.session.timeout.ms = 18000
grafana | logger=migrator t=2024-01-21T23:14:31.528538993Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.179552ms
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator |
policy-apex-pdp | ssl.keystore.key = null
kafka | zookeeper.set.acl = false
grafana | logger=migrator t=2024-01-21T23:14:31.53243036Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-apex-pdp | ssl.keystore.location = null
kafka | zookeeper.ssl.cipher.suites = null
grafana | logger=migrator t=2024-01-21T23:14:31.533644772Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.213962ms
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator | --------------
policy-apex-pdp | ssl.keystore.password = null
kafka | zookeeper.ssl.client.enable = false
grafana | logger=migrator t=2024-01-21T23:14:31.538415119Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
policy-apex-pdp | ssl.keystore.type = JKS
kafka | zookeeper.ssl.crl.enable = false
grafana | logger=migrator t=2024-01-21T23:14:31.539944443Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.529214ms
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | --------------
policy-apex-pdp | ssl.protocol = TLSv1.3
kafka | zookeeper.ssl.enabled.protocols = null
grafana | logger=migrator t=2024-01-21T23:14:31.543636479Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator |
policy-apex-pdp | ssl.provider = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
grafana | logger=migrator t=2024-01-21T23:14:31.545242875Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.606396ms
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator |
policy-apex-pdp | ssl.secure.random.implementation = null
kafka | zookeeper.ssl.keystore.location = null
grafana | logger=migrator t=2024-01-21T23:14:31.549010151Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
kafka | zookeeper.ssl.keystore.password = null
grafana | logger=migrator t=2024-01-21T23:14:31.549130732Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=120.521µs
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
policy-apex-pdp | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-01-21T23:14:31.556331532Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" kafka | zookeeper.ssl.keystore.type = null policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) policy-apex-pdp | ssl.truststore.location = null grafana | logger=migrator t=2024-01-21T23:14:31.556371253Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=40.541µs kafka | zookeeper.ssl.ocsp.enable = false policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- policy-apex-pdp | ssl.truststore.password = null grafana | logger=migrator t=2024-01-21T23:14:31.560930557Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" kafka | zookeeper.ssl.protocol = TLSv1.2 policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | ssl.truststore.type = JKS grafana | logger=migrator t=2024-01-21T23:14:31.563813734Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.882367ms kafka | zookeeper.ssl.truststore.location = null policy-db-migrator | policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | transaction.timeout.ms = 60000 grafana | logger=migrator t=2024-01-21T23:14:31.567007655Z level=info msg="Executing migration" id="Add encrypted dashboard json column" kafka | zookeeper.ssl.truststore.password = null policy-db-migrator | policy-apex-pdp | transactional.id = null kafka | zookeeper.ssl.truststore.type = null policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | value.serializer = class 
org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | -------------- policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-01-21T23:14:31.569768022Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.754607ms kafka | (kafka.server.KafkaConfig) policy-apex-pdp | policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | security.providers = null grafana | logger=migrator t=2024-01-21T23:14:31.575119294Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" kafka | [2024-01-21 23:14:32,158] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-apex-pdp | [2024-01-21T23:15:05.225+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
policy-db-migrator | -------------- policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-01-21T23:14:31.575249685Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=83.991µs kafka | [2024-01-21 23:14:32,160] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-apex-pdp | [2024-01-21T23:15:05.245+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-pap | session.timeout.ms = 45000 grafana | logger=migrator t=2024-01-21T23:14:31.578390416Z level=info msg="Executing migration" id="create quota table v1" kafka | [2024-01-21 23:14:32,163] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-db-migrator | policy-apex-pdp | [2024-01-21T23:15:05.245+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-01-21T23:14:31.579139393Z level=info msg="Migration successfully executed" id="create quota table v1" duration=749.207µs kafka | [2024-01-21 23:14:32,168] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-db-migrator | policy-apex-pdp | [2024-01-21T23:15:05.245+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878905245 policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-01-21 23:14:32,195] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:31.583249183Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-apex-pdp | [2024-01-21T23:15:05.246+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink 
[partitionId=0644ab6b-245c-4d68-8e2b-62e7f136f852, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-01-21 23:14:32,200] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-01-21T23:14:31.584660016Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.405063ms policy-db-migrator | -------------- policy-apex-pdp | [2024-01-21T23:15:05.246+00:00|INFO|ServiceManager|main] service manager starting set alive kafka | [2024-01-21 23:14:32,210] INFO Loaded 0 logs in 15ms (kafka.log.LogManager) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-01-21T23:14:31.588438923Z level=info msg="Executing migration" id="Update quota table charset" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-apex-pdp | [2024-01-21T23:15:05.246+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object kafka | [2024-01-21 23:14:32,213] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-01-21T23:14:31.588479484Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=42.811µs policy-db-migrator | -------------- policy-apex-pdp | [2024-01-21T23:15:05.251+00:00|INFO|ServiceManager|main] service manager starting topic sinks kafka | [2024-01-21 23:14:32,214] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-01-21T23:14:31.592903996Z level=info msg="Executing migration" id="create plugin_setting table" policy-db-migrator | policy-apex-pdp | [2024-01-21T23:15:05.252+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher kafka | [2024-01-21 23:14:32,228] INFO Starting the log cleaner (kafka.log.LogCleaner) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-01-21T23:14:31.594081898Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.177662ms policy-db-migrator | policy-apex-pdp | [2024-01-21T23:15:05.260+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener kafka | [2024-01-21 23:14:32,281] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-01-21T23:14:31.597755393Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-apex-pdp | [2024-01-21T23:15:05.260+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher kafka | [2024-01-21 23:14:32,304] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-01-21T23:14:31.598678042Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=921.659µs policy-db-migrator | -------------- policy-apex-pdp | [2024-01-21T23:15:05.260+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher kafka | [2024-01-21 23:14:32,316] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) policy-pap | 
ssl.keystore.key = null grafana | logger=migrator t=2024-01-21T23:14:31.602233517Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-apex-pdp | [2024-01-21T23:15:05.260+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e43a1262-c2bd-4185-8b6c-0623a45ad046, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3 kafka | [2024-01-21 23:14:32,357] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-01-21T23:14:31.605443778Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.208981ms policy-db-migrator | -------------- policy-apex-pdp | [2024-01-21T23:15:05.261+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e43a1262-c2bd-4185-8b6c-0623a45ad046, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, 
toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted kafka | [2024-01-21 23:14:32,702] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-01-21T23:14:31.609489107Z level=info msg="Executing migration" id="Update plugin_setting table charset" policy-db-migrator | policy-apex-pdp | [2024-01-21T23:15:05.261+00:00|INFO|ServiceManager|main] service manager starting Create REST server kafka | [2024-01-21 23:14:32,732] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-01-21T23:14:31.609516367Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=28.64µs policy-db-migrator | policy-apex-pdp | [2024-01-21T23:15:05.281+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: kafka | [2024-01-21 23:14:32,733] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-01-21T23:14:31.612067342Z level=info msg="Executing migration" id="create session table" policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-apex-pdp | [] kafka | [2024-01-21 23:14:32,740] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) policy-pap | ssl.provider = null grafana | logger=migrator t=2024-01-21T23:14:31.61292196Z level=info 
msg="Migration successfully executed" id="create session table" duration=846.688µs policy-db-migrator | -------------- policy-apex-pdp | [2024-01-21T23:15:05.283+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-01-21 23:14:32,745] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-01-21T23:14:31.616408244Z level=info msg="Executing migration" id="Drop old table playlist table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ab3de409-b2b8-4395-82ea-8036f980806d","timestampMs":1705878905260,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"} kafka | [2024-01-21 23:14:32,765] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-01-21T23:14:31.616529665Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=122.091µs policy-db-migrator | -------------- policy-apex-pdp | [2024-01-21T23:15:05.511+00:00|INFO|ServiceManager|main] service manager starting Rest Server kafka | [2024-01-21 23:14:32,767] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | ssl.truststore.certificates = null grafana 
| logger=migrator t=2024-01-21T23:14:31.62216354Z level=info msg="Executing migration" id="Drop old table playlist_item table" policy-db-migrator | policy-apex-pdp | [2024-01-21T23:15:05.511+00:00|INFO|ServiceManager|main] service manager starting kafka | [2024-01-21 23:14:32,769] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-01-21T23:14:31.622291471Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=128.741µs policy-apex-pdp | [2024-01-21T23:15:05.511+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-01-21T23:14:31.626014717Z level=info msg="Executing migration" id="create playlist table v2" policy-db-migrator | kafka | [2024-01-21 23:14:32,771] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-apex-pdp | [2024-01-21T23:15:05.511+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, 
servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-01-21T23:14:31.627062707Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.04739ms policy-db-migrator | > upgrade 0600-toscanodetemplate.sql kafka | [2024-01-21 23:14:32,784] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) policy-apex-pdp | [2024-01-21T23:15:05.524+00:00|INFO|ServiceManager|main] service manager started policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-01-21T23:14:31.631121277Z level=info msg="Executing migration" id="create playlist item table v2" policy-db-migrator | -------------- kafka | [2024-01-21 23:14:32,819] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) policy-apex-pdp | [2024-01-21T23:15:05.524+00:00|INFO|ServiceManager|main] service manager started policy-pap | grafana | logger=migrator t=2024-01-21T23:14:31.63244618Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.324632ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) kafka | [2024-01-21 23:14:32,866] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1705878872850,1705878872850,1,0,0,72057610932846593,258,0,27 policy-apex-pdp | [2024-01-21T23:15:05.524+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
policy-pap | [2024-01-21T23:15:03.829+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 grafana | logger=migrator t=2024-01-21T23:14:31.635909843Z level=info msg="Executing migration" id="Update playlist table charset" policy-db-migrator | -------------- kafka | (kafka.zk.KafkaZkClient) policy-pap | [2024-01-21T23:15:03.829+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-01-21T23:15:05.524+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING grafana | logger=migrator t=2024-01-21T23:14:31.635946253Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=37.56µs policy-db-migrator | kafka | [2024-01-21 23:14:32,867] INFO 
Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) policy-pap | [2024-01-21T23:15:03.829+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878903829 policy-apex-pdp | [2024-01-21T23:15:05.641+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Cluster ID: -jrszSKtSKq5TnXDeh3xeA policy-db-migrator | policy-pap | [2024-01-21T23:15:03.829+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-01-21T23:15:05.641+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: -jrszSKtSKq5TnXDeh3xeA grafana | logger=migrator t=2024-01-21T23:14:31.640495617Z level=info msg="Executing migration" id="Update playlist_item table charset" kafka | [2024-01-21 23:14:32,937] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-apex-pdp | [2024-01-21T23:15:05.642+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 grafana | logger=migrator t=2024-01-21T23:14:31.640521478Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=31.11µs kafka | [2024-01-21 23:14:32,944] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | [2024-01-21T23:15:03.831+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.643703638Z level=info msg="Executing 
migration" id="Add playlist column created_at" kafka | [2024-01-21 23:14:32,951] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | [2024-01-21T23:15:03.831+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=988c2327-3928-4e75-b348-c4ca60151503, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7bb86ac policy-apex-pdp | [2024-01-21T23:15:05.643+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) grafana | logger=migrator t=2024-01-21T23:14:31.648148472Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.444004ms kafka | [2024-01-21 23:14:32,951] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | [2024-01-21T23:15:03.831+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=988c2327-3928-4e75-b348-c4ca60151503, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-apex-pdp | [2024-01-21T23:15:05.649+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] (Re-)joining group
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
grafana | logger=migrator t=2024-01-21T23:14:31.651154591Z level=info msg="Executing migration" id="Add playlist column updated_at"
kafka | [2024-01-21 23:14:32,966] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-01-21T23:15:03.831+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | [2024-01-21T23:15:05.664+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Request joining group due to: need to re-join with the given member-id: consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.654505623Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.351492ms
kafka | [2024-01-21 23:14:32,969] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
policy-pap | allow.auto.create.topics = true
policy-apex-pdp | [2024-01-21T23:15:05.664+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.658900145Z level=info msg="Executing migration" id="drop preferences table v2"
kafka | [2024-01-21 23:14:32,977] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
policy-pap | auto.commit.interval.ms = 5000
policy-apex-pdp | [2024-01-21T23:15:05.664+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] (Re-)joining group
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.659002926Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=103.931µs
kafka | [2024-01-21 23:14:32,986] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
policy-pap | auto.include.jmx.reporter = true
policy-apex-pdp | [2024-01-21T23:15:06.172+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
grafana | logger=migrator t=2024-01-21T23:14:31.662296539Z level=info msg="Executing migration" id="drop preferences table v3"
kafka | [2024-01-21 23:14:32,990] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
policy-pap | auto.offset.reset = latest
policy-apex-pdp | [2024-01-21T23:15:06.172+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.662379329Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=83.341µs
kafka | [2024-01-21 23:14:32,996] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
policy-pap | bootstrap.servers = [kafka:9092]
policy-apex-pdp | [2024-01-21T23:15:08.670+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Successfully joined group with generation Generation{generationId=1, memberId='consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b', protocol='range'}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-01-21T23:14:31.665929304Z level=info msg="Executing migration" id="create preferences table v3"
kafka | [2024-01-21 23:14:33,003] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
policy-pap | check.crcs = true
policy-apex-pdp | [2024-01-21T23:15:08.677+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Finished assignment for group at generation 1: {consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.667084035Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.156081ms
kafka | [2024-01-21 23:14:33,013] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
policy-pap | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | [2024-01-21T23:15:08.687+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Successfully synced group in generation Generation{generationId=1, memberId='consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b', protocol='range'}
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.672528458Z level=info msg="Executing migration" id="Update preferences table charset"
kafka | [2024-01-21 23:14:33,013] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
policy-pap | client.id = consumer-policy-pap-4
policy-apex-pdp | [2024-01-21T23:15:08.687+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.672569698Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=37.221µs
kafka | [2024-01-21 23:14:33,035] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
policy-pap | client.rack =
policy-apex-pdp | [2024-01-21T23:15:08.688+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | > upgrade 0630-toscanodetype.sql
grafana | logger=migrator t=2024-01-21T23:14:31.678500955Z level=info msg="Executing migration" id="Add column team_id in preferences"
kafka | [2024-01-21 23:14:33,035] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
policy-pap | connections.max.idle.ms = 540000
policy-apex-pdp | [2024-01-21T23:15:08.697+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.682532295Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.03157ms
kafka | [2024-01-21 23:14:33,052] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
policy-pap | default.api.timeout.ms = 60000
policy-apex-pdp | [2024-01-21T23:15:08.708+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
grafana | logger=migrator t=2024-01-21T23:14:31.685905787Z level=info msg="Executing migration" id="Update team_id column values in preferences"
policy-pap | enable.auto.commit = true
policy-apex-pdp | [2024-01-21T23:15:25.261+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.686066399Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=163.902µs
kafka | [2024-01-21 23:14:33,057] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
policy-pap | exclude.internal.topics = true
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"317323ab-8653-4275-bd16-05c52ce9a052","timestampMs":1705878925260,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"}
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.691088068Z level=info msg="Executing migration" id="Add column week_start in preferences"
kafka | [2024-01-21 23:14:33,061] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
policy-pap | fetch.max.bytes = 52428800
policy-apex-pdp | [2024-01-21T23:15:25.289+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.694201018Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.11712ms
kafka | [2024-01-21 23:14:33,062] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | fetch.max.wait.ms = 500
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"317323ab-8653-4275-bd16-05c52ce9a052","timestampMs":1705878925260,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"}
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
grafana | logger=migrator t=2024-01-21T23:14:31.699764471Z level=info msg="Executing migration" id="Add column preferences.json_data"
kafka | [2024-01-21 23:14:33,088] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
policy-pap | fetch.min.bytes = 1
policy-apex-pdp | [2024-01-21T23:15:25.293+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.702110444Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.346083ms
kafka | [2024-01-21 23:14:33,095] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
policy-pap | group.id = policy-pap
policy-apex-pdp | [2024-01-21T23:15:25.444+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
grafana | logger=migrator t=2024-01-21T23:14:31.70686502Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
kafka | [2024-01-21 23:14:33,095] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
policy-pap | group.instance.id = null
policy-apex-pdp | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.706982522Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=117.281µs
kafka | [2024-01-21 23:14:33,105] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
policy-pap | heartbeat.interval.ms = 3000
policy-apex-pdp | [2024-01-21T23:15:25.455+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.710068861Z level=info msg="Executing migration" id="Add preferences index org_id"
kafka | [2024-01-21 23:14:33,119] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
policy-pap | interceptor.classes = []
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"15fcabe4-fb3e-47f6-b4c1-43b4541365cb","timestampMs":1705878925454,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"}
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:31.710838109Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=769.158µs
kafka | [2024-01-21 23:14:33,121] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
policy-pap | internal.leave.group.on.close = true
policy-apex-pdp | [2024-01-21T23:15:25.456+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
grafana | logger=migrator t=2024-01-21T23:14:31.716489933Z level=info msg="Executing migration" id="Add preferences index user_id"
kafka | [2024-01-21 23:14:33,121] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | [2024-01-21T23:15:25.459+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:31.71719723Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=707.507µs
kafka | [2024-01-21 23:14:33,121] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c3d46f28-2b6f-4c92-8ce2-04ffb23d1149","timestampMs":1705878925459,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-01-21T23:14:31.719986087Z level=info msg="Executing migration" id="create alert table v1"
policy-pap | isolation.level = read_uncommitted
kafka | [2024-01-21 23:14:33,122] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
policy-apex-pdp | [2024-01-21T23:15:25.470+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-01-21T23:14:31.720813195Z level=info msg="Migration successfully executed" id="create alert table v1" duration=826.438µs
kafka | [2024-01-21 23:14:33,125] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"15fcabe4-fb3e-47f6-b4c1-43b4541365cb","timestampMs":1705878925454,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"}
policy-db-migrator |
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-01-21T23:14:31.725610222Z level=info msg="Executing migration" id="add index alert org_id & id "
kafka | [2024-01-21 23:14:33,126] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
policy-apex-pdp | [2024-01-21T23:15:25.470+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator |
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-01-21T23:14:31.727410439Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.805477ms
kafka | [2024-01-21 23:14:33,127] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
policy-apex-pdp | [2024-01-21T23:15:25.474+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-01-21T23:14:31.731215286Z level=info msg="Executing migration" id="add index alert state"
kafka | [2024-01-21 23:14:33,127] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c3d46f28-2b6f-4c92-8ce2-04ffb23d1149","timestampMs":1705878925459,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-01-21T23:14:31.732122045Z level=info msg="Migration successfully executed" id="add index alert state" duration=906.709µs
kafka | [2024-01-21 23:14:33,131] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-apex-pdp | [2024-01-21T23:15:25.474+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-01-21T23:14:31.73681992Z level=info msg="Executing migration" id="add index alert dashboard_id"
kafka | [2024-01-21 23:14:33,132] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-21T23:15:25.509+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-01-21T23:14:31.738141853Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.321793ms
kafka | [2024-01-21 23:14:33,132] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
policy-db-migrator |
policy-apex-pdp | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-01-21T23:14:31.742133762Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
kafka | [2024-01-21 23:14:33,134] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
policy-apex-pdp | [2024-01-21T23:15:25.512+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator |
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-01-21T23:14:31.743125662Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=991.7µs
kafka | [2024-01-21 23:14:33,144] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"52c4938e-a200-46d0-81f3-21a9a4d3de9b","timestampMs":1705878925511,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-01-21T23:14:31.756114957Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
kafka | [2024-01-21 23:14:33,161] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser)
policy-apex-pdp | [2024-01-21T23:15:25.519+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-01-21T23:14:31.757082307Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=962.88µs
kafka | [2024-01-21 23:14:33,161] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser)
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"52c4938e-a200-46d0-81f3-21a9a4d3de9b","timestampMs":1705878925511,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-01-21T23:14:31.761659281Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
kafka | [2024-01-21 23:14:33,161] INFO Kafka startTimeMs: 1705878873147 (org.apache.kafka.common.utils.AppInfoParser)
policy-apex-pdp | [2024-01-21T23:15:25.519+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | --------------
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-01-21T23:14:31.762927693Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.274202ms
kafka | [2024-01-21 23:14:33,163] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
policy-apex-pdp | [2024-01-21T23:15:25.535+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-21T23:14:31.766340636Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
kafka | [2024-01-21 23:14:33,164] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
policy-apex-pdp | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"04a1ac11-bc72-4cab-ab24-e9132afd087a","timestampMs":1705878925515,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-21T23:14:31.783431982Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=17.088946ms
policy-apex-pdp | [2024-01-21T23:15:25.536+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-21T23:14:31.790838454Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
kafka | [2024-01-21 23:14:33,165] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"04a1ac11-bc72-4cab-ab24-e9132afd087a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2ab7fd29-16ad-4d9b-982a-342f9d03040b","timestampMs":1705878925536,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-01-21T23:14:31.791402719Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=563.565µs
kafka | [2024-01-21 23:14:33,177] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-apex-pdp | [2024-01-21T23:15:25.543+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-01-21T23:14:31.794491439Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
kafka | [2024-01-21 23:14:33,178] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"04a1ac11-bc72-4cab-ab24-e9132afd087a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2ab7fd29-16ad-4d9b-982a-342f9d03040b","timestampMs":1705878925536,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-01-21T23:14:31.795456068Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=964.309µs
kafka | [2024-01-21 23:14:33,179] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
policy-apex-pdp | [2024-01-21T23:15:25.544+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator |
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-01-21T23:14:31.798570489Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
kafka | [2024-01-21 23:14:33,183] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
policy-apex-pdp | [2024-01-21T23:15:56.153+00:00|INFO|RequestLog|qtp830863979-29] 172.17.0.2 - policyadmin [21/Jan/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10642 "-" "Prometheus/2.49.1"
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-01-21T23:14:31.798889472Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=336.744µs
kafka | [2024-01-21 23:14:33,194] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
policy-apex-pdp | [2024-01-21T23:16:56.081+00:00|INFO|RequestLog|qtp830863979-30] 172.17.0.2 - policyadmin [21/Jan/2024:23:16:56 +0000] "GET /metrics HTTP/1.1" 200 10644 "-" "Prometheus/2.49.1"
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-01-21T23:14:31.804685538Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
kafka | [2024-01-21 23:14:33,201] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
policy-db-migrator | --------------
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-01-21T23:14:31.805599907Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=913.909µs
kafka | [2024-01-21 23:14:33,201] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-01-21T23:14:31.810326452Z level=info msg="Executing migration" id="create alert_notification table v1"
kafka | [2024-01-21 23:14:33,212] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-01-21T23:14:31.8121889Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.869768ms
kafka | [2024-01-21 23:14:33,213] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-01-21T23:14:31.816593883Z level=info msg="Executing migration" id="Add column is_default"
kafka | [2024-01-21 23:14:33,213] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-01-21T23:14:31.820268359Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.674676ms
kafka | [2024-01-21 23:14:33,214] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-01-21T23:14:31.825908433Z level=info msg="Executing migration" id="Add column frequency"
kafka | [2024-01-21 23:14:33,215] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-01-21T23:14:31.828956993Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.04861ms
kafka | [2024-01-21 23:14:33,237] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-01-21T23:14:31.833303935Z level=info msg="Executing migration" id="Add column send_reminder"
kafka | [2024-01-21 23:14:33,289] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-21T23:14:31.837615837Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.311902ms
kafka | [2024-01-21 23:14:33,292] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-21T23:14:31.842139201Z level=info msg="Executing migration" id="Add column disable_resolve_message"
kafka | [2024-01-21 23:14:33,356] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator |
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-01-21T23:14:31.845467803Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.329422ms
kafka | [2024-01-21 23:14:38,239] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-01-21T23:14:31.855081266Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
kafka | [2024-01-21 23:14:38,241] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-01-21T23:14:31.856282288Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.203052ms
kafka | [2024-01-21 23:15:04,428] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-01-21T23:14:31.859936893Z level=info msg="Executing migration" id="Update alert table charset"
kafka | [2024-01-21 23:15:04,437] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-01-21T23:14:31.859966154Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=29.421µs
kafka | [2024-01-21 23:15:04,455] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-21T23:14:31.863987472Z level=info msg="Executing migration" id="Update alert_notification table charset"
kafka | [2024-01-21 23:15:04,468] INFO [Controller id=1] Acquired new producerId block
ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-01-21T23:14:31.864127474Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=140.272µs kafka | [2024-01-21 23:15:04,495] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(9Lf29r26S7WDxCJgkjd7Yg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-01-21T23:14:31.872967679Z level=info msg="Executing migration" id="create notification_journal table v1" kafka | [2024-01-21 23:15:04,496] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-01-21T23:14:31.874458804Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.493725ms kafka | [2024-01-21 23:15:04,498] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE 
(conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-01-21T23:14:31.883094657Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" kafka | [2024-01-21 23:15:04,498] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-01-21T23:14:31.884307869Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.213192ms kafka | [2024-01-21 23:15:04,503] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-01-21T23:14:31.889319268Z level=info msg="Executing migration" id="drop alert_notification_journal" kafka | [2024-01-21 23:15:04,503] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | policy-pap | security.providers = null grafana | logger=migrator t=2024-01-21T23:14:31.890155226Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=835.358µs kafka | [2024-01-21 23:15:04,533] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-01-21T23:14:31.894327176Z 
level=info msg="Executing migration" id="create alert_notification_state table v1" kafka | [2024-01-21 23:15:04,545] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) policy-db-migrator | -------------- policy-pap | session.timeout.ms = 45000 grafana | logger=migrator t=2024-01-21T23:14:31.895151324Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=820.298µs kafka | [2024-01-21 23:15:04,548] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-01-21T23:14:31.90191198Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" kafka | [2024-01-21 23:15:04,553] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-01-21T23:14:31.904087591Z level=info msg="Migration successfully 
executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=2.179902ms kafka | [2024-01-21 23:15:04,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-01-21T23:14:31.910790656Z level=info msg="Executing migration" id="Add for to alert table" kafka | [2024-01-21 23:15:04,554] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-01-21T23:14:31.913673884Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.882818ms kafka | [2024-01-21 23:15:04,557] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-01-21T23:14:31.916726513Z level=info msg="Executing migration" id="Add column uid in alert_notification" kafka | [2024-01-21 23:15:04,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql kafka | [2024-01-21 23:15:04,565] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(d61QdiLrRDGfXeRddxpvYw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-21T23:14:31.920712922Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.987539ms policy-pap | ssl.engine.factory.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.923961013Z level=info msg="Executing migration" id="Update uid column values in alert_notification" kafka | [2024-01-21 23:15:04,566] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-01-21T23:14:31.924240616Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=279.153µs kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) grafana | logger=migrator t=2024-01-21T23:14:31.929246804Z level=info msg="Executing migration" id="Add unique index 
alert_notification_org_id_uid" kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.930239974Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=991.8µs kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.keystore.key = null policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.934855749Z level=info msg="Executing migration" id="Remove unique index org_id_name" kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.keystore.location = null policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:31.935762178Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=905.889µs kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.keystore.password = null policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql grafana | logger=migrator t=2024-01-21T23:14:31.951200607Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.keystore.type 
= JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.957875362Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.676525ms kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) grafana | logger=migrator t=2024-01-21T23:14:31.963390465Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.provider = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:31.963465486Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=74.741µs kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-01-21T23:14:31.967272433Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition 
with assigned replicas 1 (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-01-21T23:14:31.968292263Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.01956ms policy-db-migrator | kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-01-21T23:14:31.971618725Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-01-21T23:14:31.972777976Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.159261ms policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-01-21T23:14:31.978424061Z level=info msg="Executing migration" id="Drop old annotation table v4" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY 
PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-01-21T23:14:31.978684973Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=255.532µs policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-01-21T23:14:31.983684912Z level=info msg="Executing migration" id="create annotation table v5" policy-db-migrator | kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | grafana | logger=migrator t=2024-01-21T23:14:31.984628291Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=942.289µs policy-db-migrator | kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-21T23:15:03.835+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 grafana | logger=migrator t=2024-01-21T23:14:31.988141965Z level=info msg="Executing migration" id="add index annotation 0 v3" policy-db-migrator | > upgrade 0770-toscarequirement.sql kafka | [2024-01-21 23:15:04,570] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-21T23:15:03.835+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a grafana | logger=migrator t=2024-01-21T23:14:31.989085514Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=943.099µs policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,570] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-21T23:15:03.835+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878903835 grafana | logger=migrator t=2024-01-21T23:14:31.99284579Z level=info msg="Executing migration" id="add index annotation 1 v3" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) kafka | [2024-01-21 23:15:04,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:31.994727459Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.881569ms policy-db-migrator | -------------- policy-pap | [2024-01-21T23:15:03.835+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with 
assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.000639156Z level=info msg="Executing migration" id="add index annotation 2 v3"
policy-db-migrator |
policy-pap | [2024-01-21T23:15:03.835+00:00|INFO|ServiceManager|main] Policy PAP starting topics
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.001746997Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.106081ms
policy-db-migrator |
policy-pap | [2024-01-21T23:15:03.836+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=988c2327-3928-4e75-b348-c4ca60151503, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.006099129Z level=info msg="Executing migration" id="add index annotation 3 v3"
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-pap | [2024-01-21T23:15:03.836+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0096ba3d-86d0-4a50-8361-ec89b03a0194, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.007689074Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.589505ms
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:03.836+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3bade7b5-8875-4f5c-b873-2f3ab75fe5de, alive=false, publisher=null]]: starting
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.013615661Z level=info msg="Executing migration" id="add index annotation 4 v3"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-pap | [2024-01-21T23:15:03.875+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.014849923Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.237892ms
policy-db-migrator | --------------
policy-pap | acks = -1
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.018533748Z level=info msg="Executing migration" id="Update annotation table charset"
policy-db-migrator |
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.018571558Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=39.71µs
policy-db-migrator |
policy-pap | batch.size = 16384
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.023930369Z level=info msg="Executing migration" id="Add column region_id to annotation table"
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.028149469Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.21814ms
policy-db-migrator | --------------
policy-pap | buffer.memory = 33554432
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.032669943Z level=info msg="Executing migration" id="Drop category_id index"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.03349561Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=825.287µs
policy-pap | client.id = producer-1
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.037472758Z level=info msg="Executing migration" id="Add column tags to annotation table"
policy-pap | compression.type = none
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.043812569Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.334891ms
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.049518533Z level=info msg="Executing migration" id="Create annotation_tag table v2"
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
grafana | logger=migrator t=2024-01-21T23:14:32.050014268Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=496.495µs
policy-pap | enable.idempotence = true
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.055821923Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
policy-pap | interceptor.classes = []
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
grafana | logger=migrator t=2024-01-21T23:14:32.058515759Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=2.687036ms
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.063551817Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
policy-pap | linger.ms = 0
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.064514246Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=962.519µs
policy-pap | max.block.ms = 60000
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | max.in.flight.requests.per.connection = 5
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.068476655Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2024-01-21T23:14:32.082992723Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.508779ms
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-pap | max.request.size = 1048576
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.089108281Z level=info msg="Executing migration" id="Create annotation_tag table v3"
policy-db-migrator | --------------
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.089720027Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=619.486µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | metadata.max.idle.ms = 300000
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.094658444Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
policy-db-migrator | --------------
policy-pap | metric.reporters = []
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.095555623Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=897.329µs
policy-db-migrator |
policy-pap | metrics.num.samples = 2
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.09944547Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
policy-db-migrator |
policy-pap | metrics.recording.level = INFO
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.099728603Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=283.423µs
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-pap | metrics.sample.window.ms = 30000
kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | partitioner.adaptive.partitioning.enable = true
kafka | [2024-01-21 23:15:04,595] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.105808421Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
policy-pap | partitioner.availability.timeout.ms = 0
kafka | [2024-01-21 23:15:04,596] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-01-21T23:14:32.10678111Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=978.03µs
policy-pap | partitioner.class = null
kafka | [2024-01-21 23:15:04,597] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.112916338Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
policy-pap | partitioner.ignore.keys = false
kafka | [2024-01-21 23:15:04,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.113207141Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=285.923µs
policy-pap | receive.buffer.bytes = 32768
kafka | [2024-01-21 23:15:04,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.118803774Z level=info msg="Executing migration" id="Add created time to annotation table"
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
grafana | logger=migrator t=2024-01-21T23:14:32.123330508Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.534054ms
policy-pap | reconnect.backoff.ms = 50
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.129529347Z level=info msg="Executing migration" id="Add updated time to annotation table"
policy-pap | request.timeout.ms = 30000
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
grafana | logger=migrator t=2024-01-21T23:14:32.134422314Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.895317ms
policy-pap | retries = 2147483647
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.137902387Z level=info msg="Executing migration" id="Add index for created in annotation table"
policy-pap | retry.backoff.ms = 100
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.138704365Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=802.028µs
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.143691792Z level=info msg="Executing migration" id="Add index for updated in annotation table"
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.jaas.config = null
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
grafana | logger=migrator t=2024-01-21T23:14:32.144668742Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=977.24µs
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.147961863Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
grafana | logger=migrator t=2024-01-21T23:14:32.148258356Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=329.213µs
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.151425546Z level=info msg="Executing migration" id="Add epoch_end column"
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.156093651Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.667065ms
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.163504541Z level=info msg="Executing migration" id="Add index for epoch_end"
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-01-21T23:14:32.16446623Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=965.659µs
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.class = null
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
grafana | logger=migrator t=2024-01-21T23:14:32.168990104Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.169174835Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=181.141µs
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
grafana | logger=migrator t=2024-01-21T23:14:32.172495557Z level=info msg="Executing migration" id="Move region to single row"
kafka | [2024-01-21 23:15:04,599] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.172873421Z level=info msg="Migration successfully executed" id="Move region to single row" duration=377.594µs
kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.17805979Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
kafka | [2024-01-21 23:15:04,602] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.179371193Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.310853ms
kafka | [2024-01-21 23:15:04,603] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
grafana | logger=migrator t=2024-01-21T23:14:32.184674993Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.186021566Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.349223ms
kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
grafana | logger=migrator t=2024-01-21T23:14:32.190311107Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.191440728Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.140731ms
kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.197120542Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-01-21T23:14:32.197987971Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=866.909µs
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-01-21T23:14:32.20524434Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-01-21T23:14:32.206775614Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.536224ms
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-21T23:14:32.215246275Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-21T23:14:32.215993482Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=747.667µs
policy-db-migrator |
kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-01-21T23:14:32.220942999Z level=info msg="Executing migration" id="Increase tags column to length 4096"
policy-db-migrator |
kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-01-21T23:14:32.221078651Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=139.142µs
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-01-21T23:14:32.229944235Z level=info msg="Executing migration" id="create test_data table"
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-01-21T23:14:32.231123177Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.183852ms
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-01-21T23:14:32.236193335Z level=info msg="Executing migration" id="create dashboard_version table v1"
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.providers = null
grafana | logger=migrator t=2024-01-21T23:14:32.237064863Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=873.368µs
policy-db-migrator |
kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-01-21T23:14:32.242591726Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
policy-db-migrator |
kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-01-21T23:14:32.243451734Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=860.108µs
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-01-21T23:14:32.246729235Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-01-21T23:14:32.247600964Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=871.419µs
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
kafka | [2024-01-21 23:15:04,607] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-01-21T23:14:32.252586511Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,607] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-01-21T23:14:32.252792823Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=209.212µs
policy-db-migrator |
kafka | [2024-01-21 23:15:04,607] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-01-21T23:14:32.256852602Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
policy-db-migrator |
kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.257215966Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=363.524µs
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-01-21T23:14:32.267428903Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keymanager.algorithm = SunX509
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
grafana | logger=migrator t=2024-01-21T23:14:32.267527754Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=103.151µs
kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keystore.certificate.chain = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.272153748Z level=info msg="Executing migration" id="create team table"
kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keystore.key = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.273217158Z level=info msg="Migration successfully executed" id="create team table" duration=1.06394ms
kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keystore.location = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.278247036Z level=info msg="Executing migration" id="add index team.org_id"
kafka | [2024-01-21 23:15:04,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keystore.password = null
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
grafana | logger=migrator t=2024-01-21T23:14:32.27966449Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.417714ms
kafka | [2024-01-21 23:15:04,610] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keystore.type = JKS
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.283381955Z level=info msg="Executing migration" id="add unique index team_org_id_name"
kafka | [2024-01-21 23:15:04,611] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
grafana | logger=migrator t=2024-01-21T23:14:32.284293844Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=911.089µs
kafka | [2024-01-21 23:15:04,675] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.provider = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.288652126Z level=info msg="Executing migration" id="Add column uid in team"
kafka | [2024-01-21 23:15:04,692] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with
properties {} (kafka.log.LogManager) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.293623533Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.969877ms policy-pap | ssl.secure.random.implementation = null kafka | [2024-01-21 23:15:04,694] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.303969252Z level=info msg="Executing migration" id="Update uid column values in team" policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-01-21 23:15:04,695] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql grafana | logger=migrator t=2024-01-21T23:14:32.304293635Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=326.243µs policy-pap | ssl.truststore.certificates = null kafka | [2024-01-21 23:15:04,697] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(9Lf29r26S7WDxCJgkjd7Yg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.312902887Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" policy-pap | ssl.truststore.location = null kafka | [2024-01-21 23:15:04,709] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) grafana | logger=migrator t=2024-01-21T23:14:32.314078138Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.178901ms policy-pap | ssl.truststore.password = null kafka | [2024-01-21 23:15:04,719] INFO [Broker id=1] Finished LeaderAndIsr request in 162ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.320412029Z level=info msg="Executing migration" id="create team member table" policy-pap | ssl.truststore.type = JKS kafka | [2024-01-21 23:15:04,722] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=9Lf29r26S7WDxCJgkjd7Yg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.321151436Z level=info msg="Migration successfully executed" id="create team member table" duration=739.127µs policy-pap | transaction.timeout.ms = 60000 kafka | [2024-01-21 23:15:04,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.324346567Z level=info msg="Executing migration" id="add index team_member.org_id" policy-pap | transactional.id = null kafka | [2024-01-21 23:15:04,731] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql grafana | logger=migrator t=2024-01-21T23:14:32.325423777Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.07557ms policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-01-21 23:15:04,734] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.332495114Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" policy-pap | kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) grafana | logger=migrator t=2024-01-21T23:14:32.333602205Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.113371ms policy-pap | 
[2024-01-21T23:15:03.892+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.336682074Z level=info msg="Executing migration" id="add index team_member.team_id" policy-pap | [2024-01-21T23:15:03.910+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.337366921Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=684.467µs policy-pap | [2024-01-21T23:15:03.910+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.340623762Z level=info msg="Executing migration" id="Add column email to team table" policy-pap | [2024-01-21T23:15:03.910+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878903910 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 
policy-pap | [2024-01-21T23:15:03.910+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3bade7b5-8875-4f5c-b873-2f3ab75fe5de, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.345060874Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.436442ms kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.352960689Z level=info msg="Executing migration" id="Add column external to team_member table" policy-pap | [2024-01-21T23:15:03.911+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a089b753-ae7e-4ef2-9693-63ba0de08080, alive=false, publisher=null]]: starting policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) policy-pap | [2024-01-21T23:15:03.911+00:00|INFO|ProducerConfig|main] ProducerConfig values: kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.357428212Z level=info msg="Migration successfully executed" id="Add column external to 
team_member table" duration=4.467283ms policy-db-migrator | -------------- policy-pap | acks = -1 grafana | logger=migrator t=2024-01-21T23:14:32.363208587Z level=info msg="Executing migration" id="Add column permission to team_member table" policy-db-migrator | kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-01-21T23:14:32.36771873Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.512283ms policy-db-migrator | policy-pap | batch.size = 16384 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.371097192Z level=info msg="Executing migration" id="create dashboard acl table" policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.371957251Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=860.159µs policy-pap | buffer.memory = 
33554432 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.377441493Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.378331171Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=893.838µs policy-db-migrator | -------------- policy-pap | client.id = producer-2 grafana | logger=migrator t=2024-01-21T23:14:32.383561351Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" policy-db-migrator | kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | compression.type = none grafana | logger=migrator t=2024-01-21T23:14:32.384323449Z level=info msg="Migration successfully executed" id="add 
unique index dashboard_acl_dashboard_id_user_id" duration=761.498µs policy-db-migrator | policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-01-21T23:14:32.387175416Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | delivery.timeout.ms = 120000 grafana | logger=migrator t=2024-01-21T23:14:32.387843523Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=667.777µs policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | enable.idempotence = true grafana | logger=migrator t=2024-01-21T23:14:32.391972082Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-01-21T23:14:32.392691319Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=719.147µs 
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | -------------- policy-pap | linger.ms = 0 grafana | logger=migrator t=2024-01-21T23:14:32.400549804Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.401321141Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=771.057µs kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | max.block.ms = 60000 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.404900125Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | max.in.flight.requests.per.connection = 5 grafana | logger=migrator 
t=2024-01-21T23:14:32.405555702Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=655.416µs policy-pap | max.request.size = 1048576 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.408920364Z level=info msg="Executing migration" id="add index dashboard_permission" policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.40954675Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=626.676µs policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metadata.max.idle.ms = 300000 grafana | logger=migrator t=2024-01-21T23:14:32.41583247Z level=info msg="Executing migration" id="save default 
acl rules in dashboard_acl table" policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.416224883Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=392.704µs policy-db-migrator | kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-01-21T23:14:32.419205742Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" policy-db-migrator | kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-01-21T23:14:32.419373043Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=166.761µs policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-01-21T23:14:32.423977637Z level=info msg="Executing migration" id="create tag table" kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-01-21T23:14:32.424475822Z level=info msg="Migration successfully executed" id="create tag table" duration=498.385µs policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-01-21T23:14:32.427758153Z level=info msg="Executing migration" id="add index tag.key_value" policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | partitioner.availability.timeout.ms = 0 policy-db-migrator | kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.428607461Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=848.948µs policy-pap | partitioner.class = null kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.434868611Z level=info msg="Executing migration" id="create login attempt table" policy-db-migrator | policy-pap | partitioner.ignore.keys = false kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.435361836Z level=info msg="Migration successfully executed" id="create login attempt table" duration=493.155µs policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-pap | receive.buffer.bytes = 32768 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.443063059Z level=info msg="Executing migration" id="add index login_attempt.username" policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.443697705Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=634.636µs kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | request.timeout.ms = 30000 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.449369839Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" policy-pap | retries = 2147483647 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.450257808Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=892.539µs kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | retry.backoff.ms = 100 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql grafana | logger=migrator t=2024-01-21T23:14:32.453762981Z level=info 
msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.472943165Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=19.177984ms kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.jaas.config = null policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-21T23:14:32.478906102Z level=info msg="Executing migration" id="create login_attempt v2" kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.479596778Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=690.457µs 
kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.486814507Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.kerberos.service.name = null policy-db-migrator | kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql grafana | logger=migrator t=2024-01-21T23:14:32.48822645Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.411553ms kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-01-21T23:14:32.492152718Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-21T23:14:32.492610722Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=457.754µs policy-pap | sasl.login.callback.handler.class = null kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.496699461Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" policy-pap | sasl.login.class = null kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.497279047Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=576.276µs policy-pap | 
sasl.login.connect.timeout.ms = null kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.502083383Z level=info msg="Executing migration" id="create user auth table" policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql grafana | logger=migrator t=2024-01-21T23:14:32.502746129Z level=info msg="Migration successfully executed" id="create user auth table" duration=662.526µs policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.506227202Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-21T23:14:32.507636565Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.408993ms policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.511686684Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.511786565Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=100.941µs policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-01-21 23:15:04,762] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.521004673Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-01-21 23:15:04,762] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.529164041Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.157828ms policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql kafka | [2024-01-21 23:15:04,762] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.532613974Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.536213679Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.597864ms policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.539867163Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.544813711Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.945617ms policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.548621727Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.553513933Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.895096ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.566101924Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.567193794Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.096431ms policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.571318063Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.575035219Z level=info msg="Migration successfully executed" id="Add OAuth ID token to 
user_auth" duration=3.717016ms policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.580954425Z level=info msg="Executing migration" id="create server_lock table" policy-pap | security.protocol = PLAINTEXT policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.58147798Z level=info msg="Migration successfully executed" id="create server_lock table" duration=523.465µs policy-pap | security.providers = null kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql grafana | logger=migrator t=2024-01-21T23:14:32.584984794Z level=info msg="Executing migration" id="add index server_lock.operation_uid" policy-pap | send.buffer.bytes = 131072 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 
(state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.5856272Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=642.266µs policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.589228945Z level=info msg="Executing migration" id="create user auth token table" policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-21T23:14:32.58978338Z level=info msg="Migration successfully executed" id="create user auth token table" duration=555.105µs policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-db-migrator | -------------- policy-pap | 
ssl.endpoint.identification.algorithm = https kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.595437374Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" policy-pap | ssl.engine.factory.class = null kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.596246451Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=808.887µs policy-pap | ssl.key.password = null policy-db-migrator | > upgrade 0100-pdp.sql kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.600679004Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" policy-pap | ssl.keymanager.algorithm = SunX509 
policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.60134685Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=667.656µs policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.607390118Z level=info msg="Executing migration" id="add index user_auth_token.user_id" policy-pap | ssl.keystore.key = null policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.608123455Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=733.147µs policy-pap | ssl.keystore.location = null policy-db-migrator | kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.613857949Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" policy-pap | ssl.keystore.password = null policy-db-migrator | kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.617621465Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.763146ms policy-pap | ssl.keystore.type = JKS policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.621117689Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 
(state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.622623643Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.506964ms policy-pap | ssl.provider = null policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.625923245Z level=info msg="Executing migration" id="create cache_data table" policy-pap | ssl.secure.random.implementation = null policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.627119086Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.195752ms policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 
(state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.633297855Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" policy-pap | ssl.truststore.certificates = null policy-db-migrator | kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.633976461Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=679.276µs policy-pap | ssl.truststore.location = null policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.638204082Z level=info msg="Executing migration" id="create short_url table v1" policy-pap | ssl.truststore.password = null policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) grafana | logger=migrator 
t=2024-01-21T23:14:32.63904384Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=839.778µs
policy-pap | ssl.truststore.type = JKS
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.642447302Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
policy-pap | transaction.timeout.ms = 60000
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.643582883Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.134251ms
policy-pap | transactional.id = null
policy-db-migrator |
kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.651041505Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator |
kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.651117305Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=77.101µs
policy-pap |
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.658129702Z level=info msg="Executing migration" id="delete alert_definition table"
policy-pap | [2024-01-21T23:15:03.913+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.658194013Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=64.471µs
policy-pap | [2024-01-21T23:15:03.924+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
grafana | logger=migrator t=2024-01-21T23:14:32.661415963Z level=info msg="Executing migration" id="recreate alert_definition table"
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.924+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.661957569Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=541.596µs
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.924+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878903924
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.670371799Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.926+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a089b753-ae7e-4ef2-9693-63ba0de08080, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.671324738Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=957.329µs
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.926+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
grafana | logger=migrator t=2024-01-21T23:14:32.67675466Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.928+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.677542357Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=787.477µs
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.934+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
grafana | logger=migrator t=2024-01-21T23:14:32.681064851Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.935+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.681119321Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=54.8µs
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.940+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.685710235Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.942+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-db-migrator |
--------------
grafana | logger=migrator t=2024-01-21T23:14:32.686635454Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=925.539µs
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.942+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
grafana | logger=migrator t=2024-01-21T23:14:32.691012976Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.944+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
grafana | logger=migrator t=2024-01-21T23:14:32.692034465Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.021499ms
grafana | logger=migrator t=2024-01-21T23:14:32.698933981Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
policy-pap | [2024-01-21T23:15:03.945+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.700659188Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.725077ms
kafka | [2024-01-21 23:15:04,767] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
policy-pap | [2024-01-21T23:15:03.948+00:00|INFO|ServiceManager|main] Policy PAP started
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.709021618Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
kafka | [2024-01-21 23:15:04,767] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
policy-pap | [2024-01-21T23:15:03.944+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.710927226Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.904468ms
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:03.953+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.499 seconds (process running for 12.156)
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
grafana | logger=migrator t=2024-01-21T23:14:32.715462509Z level=info msg="Executing migration" id="Add column paused in alert_definition"
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.397+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: -jrszSKtSKq5TnXDeh3xeA
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.722446206Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.982947ms
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.398+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
grafana | logger=migrator t=2024-01-21T23:14:32.73025691Z level=info msg="Executing migration" id="drop alert_definition table"
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.398+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: -jrszSKtSKq5TnXDeh3xeA
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.73128966Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.03223ms
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.401+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: -jrszSKtSKq5TnXDeh3xeA
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.736237977Z level=info msg="Executing migration" id="delete alert_definition_version table"
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.479+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.736327958Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=87.481µs
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.479+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Cluster ID: -jrszSKtSKq5TnXDeh3xeA
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-01-21T23:14:32.739714001Z level=info msg="Executing migration" id="recreate alert_definition_version table"
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.483+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.74072638Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.005839ms
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.485+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
grafana | logger=migrator t=2024-01-21T23:14:32.74380001Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.504+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.745364855Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.563904ms
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.599+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.751476983Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-21T23:15:04.614+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.752503213Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.02558ms
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-pap | [2024-01-21T23:15:04.717+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-21T23:14:32.756735753Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.756811594Z level=info
msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=76.031µs
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
grafana | logger=migrator t=2024-01-21T23:14:32.760140765Z level=info msg="Executing migration" id="drop alert_definition_version table"
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | JOIN pdpstatistics b
grafana | logger=migrator t=2024-01-21T23:14:32.761543319Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.401754ms
policy-pap | [2024-01-21T23:15:04.729+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
grafana | logger=migrator t=2024-01-21T23:14:32.765924691Z level=info msg="Executing migration" id="create alert_instance table"
policy-pap | [2024-01-21T23:15:05.354+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | SET a.id = b.id
grafana | logger=migrator t=2024-01-21T23:14:32.767214663Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.289812ms
policy-pap | [2024-01-21T23:15:05.357+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.772835687Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
policy-pap | [2024-01-21T23:15:05.371+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] (Re-)joining group
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.774916177Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=2.08323ms
policy-pap | [2024-01-21T23:15:05.377+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:32.778326509Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
policy-pap | [2024-01-21T23:15:05.419+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Request joining group due to: need to re-join with the given member-id: consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-01-21T23:14:32.779215008Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=888.049µs
policy-pap | [2024-01-21T23:15:05.420+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:32.782706271Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
policy-pap | [2024-01-21T23:15:05.420+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.'
(MemberIdRequiredException)
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
grafana | logger=migrator t=2024-01-21T23:14:32.788229124Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.521783ms
policy-pap | [2024-01-21T23:15:05.420+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:05.421+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
grafana | logger=migrator t=2024-01-21T23:14:32.79304894Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-21T23:15:05.421+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] (Re-)joining group
grafana | logger=migrator t=2024-01-21T23:14:32.79413992Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.08967ms
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
grafana | logger=migrator t=2024-01-21T23:14:32.797324851Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
policy-pap | [2024-01-21T23:15:08.447+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b', protocol='range'}
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.79832028Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=995.699µs
policy-pap | [2024-01-21T23:15:08.448+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Successfully joined group with generation Generation{generationId=1, memberId='consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd', protocol='range'}
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
kafka | [2024-01-21 23:15:04,771] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.801573201Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
policy-pap | [2024-01-21T23:15:08.455+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Finished assignment for group at generation 1: {consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,771] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.839635844Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=38.065973ms
policy-pap | [2024-01-21T23:15:08.455+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1:
{consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator |
kafka | [2024-01-21 23:15:04,771] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.846286628Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
policy-pap | [2024-01-21T23:15:08.495+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Successfully synced group in generation Generation{generationId=1, memberId='consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd', protocol='range'}
policy-db-migrator |
kafka | [2024-01-21 23:15:04,771] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.8790478Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=32.756102ms
policy-pap | [2024-01-21T23:15:08.495+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
kafka | [2024-01-21 23:15:04,771] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.884358651Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
policy-pap | [2024-01-21T23:15:08.500+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,771] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.885391831Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.03291ms
policy-pap | [2024-01-21T23:15:08.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b', protocol='range'}
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
kafka | [2024-01-21 23:15:04,771] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.892216906Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
policy-pap | [2024-01-21T23:15:08.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,771] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.893818391Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.600905ms
policy-pap | [2024-01-21T23:15:08.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator |
kafka | [2024-01-21 23:15:04,771] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.910844814Z level=info msg="Executing migration" id="add current_reason column related to current_state"
policy-pap | [2024-01-21T23:15:08.522+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator |
kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:32.917454487Z level=info msg="Migration successfully executed"
id="add current_reason column related to current_state" duration=6.610783ms policy-pap | [2024-01-21T23:15:08.528+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Found no committed offset for partition policy-pdp-pap-0 policy-db-migrator | > upgrade 0210-sequence.sql kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-21T23:15:08.544+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.921195973Z level=info msg="Executing migration" id="create alert_rule table" kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) grafana | logger=migrator t=2024-01-21T23:14:32.922113712Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=917.729µs policy-pap | [2024-01-21T23:15:08.546+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.927362612Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" policy-pap | [2024-01-21T23:15:09.304+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.928602713Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.234641ms policy-pap | [2024-01-21T23:15:09.304+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-db-migrator | kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.943237633Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" policy-pap | 
[2024-01-21T23:15:09.307+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms policy-db-migrator | kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.945858208Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=2.629705ms policy-pap | [2024-01-21T23:15:25.299+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-db-migrator | > upgrade 0220-sequence.sql kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.952647843Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" policy-pap | [] policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:32.953870774Z level=info msg="Migration successfully executed" id="add 
index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.222511ms policy-pap | [2024-01-21T23:15:25.300+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) grafana | logger=migrator t=2024-01-21T23:14:32.963314824Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"317323ab-8653-4275-bd16-05c52ce9a052","timestampMs":1705878925260,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"} policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.963607597Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=298.383µs kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-21T23:15:25.304+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.970887347Z level=info msg="Executing migration" 
id="add column for to alert_rule" kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"317323ab-8653-4275-bd16-05c52ce9a052","timestampMs":1705878925260,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"} policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:32.976670402Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.778715ms kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-21T23:15:25.308+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql grafana | logger=migrator t=2024-01-21T23:14:32.981743821Z level=info msg="Executing migration" id="add column annotations to alert_rule" kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 
(state.change.logger) policy-pap | [2024-01-21T23:15:25.387+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.986228653Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.482432ms kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-21T23:15:25.388+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting listener policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) grafana | logger=migrator t=2024-01-21T23:14:32.989129941Z level=info msg="Executing migration" id="add column labels to alert_rule" kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-21T23:15:25.388+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting timer policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:32.9952672Z level=info msg="Migration successfully 
executed" id="add column labels to alert_rule" duration=6.136719ms kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-21T23:15:25.389+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=63abcfac-b36b-46ca-b5a5-4a747a0bd5bc, expireMs=1705878955389] grafana | logger=migrator t=2024-01-21T23:14:33.00267722Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | [2024-01-21T23:15:25.391+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting enqueue grafana | logger=migrator t=2024-01-21T23:14:33.003580649Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=903.749µs kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | 
[2024-01-21T23:15:25.392+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate started grafana | logger=migrator t=2024-01-21T23:14:33.007625767Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-pap | [2024-01-21T23:15:25.392+00:00|INFO|TimerManager|Thread-9] update timer waiting 29997ms Timer [name=63abcfac-b36b-46ca-b5a5-4a747a0bd5bc, expireMs=1705878955389] grafana | logger=migrator t=2024-01-21T23:14:33.008929399Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.299912ms kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-21T23:15:25.395+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-01-21T23:14:33.01220968Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-01-21T23:14:33.018214016Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.003676ms kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-21T23:15:25.444+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-01-21T23:14:33.024683747Z level=info msg="Executing migration" id="add panel_id column to alert_rule" kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 
(state.change.logger) policy-db-migrator | policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-01-21T23:14:33.032532101Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.849074ms kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | [2024-01-21T23:15:25.444+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-01-21T23:14:33.038263714Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-pap | [2024-01-21T23:15:25.456+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-01-21T23:14:33.039231313Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 
duration=976.289µs kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.042583735Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | DROP TABLE IF EXISTS toscatrigger grafana | logger=migrator t=2024-01-21T23:14:33.04852197Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.930475ms kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | 
[2024-01-21T23:15:25.456+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.054188724Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-21T23:15:25.469+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.060617214Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.43022ms policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"15fcabe4-fb3e-47f6-b4c1-43b4541365cb","timestampMs":1705878925454,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"} kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.065864183Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" policy-pap | [2024-01-21T23:15:25.471+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
grafana | logger=migrator t=2024-01-21T23:14:33.065914674Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=51.051µs
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"15fcabe4-fb3e-47f6-b4c1-43b4541365cb","timestampMs":1705878925454,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"}
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:33.069726459Z level=info msg="Executing migration" id="create alert_rule_version table"
policy-pap | [2024-01-21T23:15:25.474+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
grafana | logger=migrator t=2024-01-21T23:14:33.070672918Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=945.029µs
policy-pap | [2024-01-21T23:15:25.478+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c3d46f28-2b6f-4c92-8ce2-04ffb23d1149","timestampMs":1705878925459,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-21T23:14:33.076409212Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-21T23:15:25.492+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping
grafana | logger=migrator t=2024-01-21T23:14:33.077732824Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.327452ms
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-21T23:15:25.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping enqueue
grafana | logger=migrator t=2024-01-21T23:14:33.087772999Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-pap | [2024-01-21T23:15:25.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping timer
grafana | logger=migrator t=2024-01-21T23:14:33.089150272Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.378042ms
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:25.493+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=63abcfac-b36b-46ca-b5a5-4a747a0bd5bc, expireMs=1705878955389]
grafana | logger=migrator t=2024-01-21T23:14:33.093160879Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
policy-pap | [2024-01-21T23:15:25.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping listener
grafana | logger=migrator t=2024-01-21T23:14:33.09326948Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=108.981µs
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:25.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopped
grafana | logger=migrator t=2024-01-21T23:14:33.101610608Z level=info msg="Executing migration" id="add column for to alert_rule_version"
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate successful
grafana | logger=migrator t=2024-01-21T23:14:33.109934567Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=8.329208ms
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 start publishing next request
grafana | logger=migrator t=2024-01-21T23:14:33.115402618Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange starting
grafana | logger=migrator t=2024-01-21T23:14:33.121667687Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.265039ms
kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange starting listener
grafana | logger=migrator t=2024-01-21T23:14:33.127559442Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
kafka | [2024-01-21 23:15:04,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange starting timer
grafana | logger=migrator t=2024-01-21T23:14:33.13375946Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.201588ms
kafka | [2024-01-21 23:15:04,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=a10cd6bc-dc68-4d18-bc08-45c43b208d80, expireMs=1705878955496]
grafana | logger=migrator t=2024-01-21T23:14:33.140672705Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-21T23:15:25.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange starting enqueue
grafana | logger=migrator t=2024-01-21T23:14:33.145173357Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.497372ms
kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:25.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange started
grafana | logger=migrator t=2024-01-21T23:14:33.148190105Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
policy-pap | [2024-01-21T23:15:25.497+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=a10cd6bc-dc68-4d18-bc08-45c43b208d80, expireMs=1705878955496]
grafana | logger=migrator t=2024-01-21T23:14:33.152597967Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.407302ms
kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:25.498+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-21T23:14:33.156510653Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-21T23:14:33.156593244Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=83.521µs
kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:25.507+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-01-21T23:14:33.162160066Z level=info msg="Executing migration" id=create_alert_configuration_table
kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
grafana | logger=migrator t=2024-01-21T23:14:33.162803222Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=645.786µs
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c3d46f28-2b6f-4c92-8ce2-04ffb23d1149","timestampMs":1705878925459,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:33.168273714Z level=info msg="Executing migration" id="Add column default in alert_configuration"
policy-pap | [2024-01-21T23:15:25.508+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 63abcfac-b36b-46ca-b5a5-4a747a0bd5bc
kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:33.172845186Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.571072ms
policy-pap | [2024-01-21T23:15:25.513+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-01-21 23:15:04,777] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.178928544Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
policy-db-migrator |
policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-21 23:15:04,777] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.179046045Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=122.741µs
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-pap | [2024-01-21T23:15:25.513+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.183442256Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:25.518+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.189010558Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.566532ms
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"52c4938e-a200-46d0-81f3-21a9a4d3de9b","timestampMs":1705878925511,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.193257918Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
policy-db-migrator | --------------
policy-pap | [2024-01-21T23:15:25.519+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a10cd6bc-dc68-4d18-bc08-45c43b208d80
kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.194028145Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=769.547µs
policy-db-migrator |
policy-pap | [2024-01-21T23:15:25.524+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.197566749Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
policy-db-migrator | --------------
policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.202287433Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.718854ms
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.20838882Z level=info msg="Executing migration" id=create_ngalert_configuration_table
grafana | logger=migrator t=2024-01-21T23:14:33.209314739Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=930.839µs
policy-pap | [2024-01-21T23:15:25.524+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,780] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.214644319Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
policy-pap | [2024-01-21T23:15:25.526+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.215934921Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.293252ms
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"52c4938e-a200-46d0-81f3-21a9a4d3de9b","timestampMs":1705878925511,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.222704544Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
policy-pap | [2024-01-21T23:15:25.526+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange stopping
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.227914223Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=5.212359ms
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange stopping enqueue
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.231491277Z level=info msg="Executing migration" id="create provenance_type table"
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange stopping timer
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.232083622Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=591.145µs
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=a10cd6bc-dc68-4d18-bc08-45c43b208d80, expireMs=1705878955496]
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.236473973Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange stopping listener
policy-db-migrator |
kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.237313741Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=840.448µs
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange stopped
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.243804292Z level=info msg="Executing migration" id="create alert_image table"
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange successful
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.245257846Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.454264ms
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 start publishing next request
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.252272142Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting
policy-db-migrator |
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.253590464Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.322963ms
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting listener
policy-db-migrator |
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.257246378Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting timer
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.257343719Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=98.641µs
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=04a1ac11-bc72-4cab-ab24-e9132afd087a, expireMs=1705878955527]
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.26069086Z level=info msg="Executing migration" id=create_alert_configuration_history_table
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting enqueue
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.26166961Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=978.63µs
policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate started
policy-db-migrator | --------------
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
policy-pap | [2024-01-21T23:15:25.528+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:33.267157061Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"04a1ac11-bc72-4cab-ab24-e9132afd087a","timestampMs":1705878925515,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:33.268260962Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.104281ms
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-pap | [2024-01-21T23:15:25.534+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | > upgrade 0100-upgrade.sql
grafana | logger=migrator t=2024-01-21T23:14:33.27343041Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"04a1ac11-bc72-4cab-ab24-e9132afd087a","timestampMs":1705878925515,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:33.273886244Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-pap | [2024-01-21T23:15:25.534+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-db-migrator | select 'upgrade to 1100 completed' as msg
grafana | logger=migrator t=2024-01-21T23:14:33.276749761Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-pap | [2024-01-21T23:15:25.541+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:33.277279376Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=529.645µs
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"04a1ac11-bc72-4cab-ab24-e9132afd087a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2ab7fd29-16ad-4d9b-982a-342f9d03040b","timestampMs":1705878925536,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:33.282342454Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-pap | [2024-01-21T23:15:25.542+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 04a1ac11-bc72-4cab-ab24-e9132afd087a
policy-db-migrator | msg
grafana | logger=migrator t=2024-01-21T23:14:33.283150081Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=807.617µs
kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-pap | [2024-01-21T23:15:25.543+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | upgrade to 1100 completed
grafana | logger=migrator t=2024-01-21T23:14:33.289042456Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"04a1ac11-bc72-4cab-ab24-e9132afd087a","timestampMs":1705878925515,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
grafana | logger=migrator t=2024-01-21T23:14:33.299432084Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=10.388968ms
kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-pap | [2024-01-21T23:15:25.543+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
grafana | logger=migrator t=2024-01-21T23:14:33.302740855Z level=info msg="Executing migration" id="create library_element table v1"
kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-pap | [2024-01-21T23:15:25.546+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-21T23:14:33.303637633Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=896.788µs
kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"04a1ac11-bc72-4cab-ab24-e9132afd087a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2ab7fd29-16ad-4d9b-982a-342f9d03040b","timestampMs":1705878925536,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-21T23:14:33.308923613Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller
1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping grafana | logger=migrator t=2024-01-21T23:14:33.310148994Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.225481ms kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping enqueue grafana | logger=migrator t=2024-01-21T23:14:33.313495626Z level=info msg="Executing migration" id="create library_element_connection table v1" kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.314358754Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=862.238µs kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping timer policy-db-migrator | > upgrade 0110-idx_tsidx1.sql grafana | logger=migrator t=2024-01-21T23:14:33.317974598Z level=info msg="Executing migration" id="add index 
library_element_connection element_id-kind-connection_id" kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=04a1ac11-bc72-4cab-ab24-e9132afd087a, expireMs=1705878955527] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.319789255Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.813837ms kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping listener policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics grafana | logger=migrator t=2024-01-21T23:14:33.326089394Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopped policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.327220914Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.13136ms kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 
epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-pap | [2024-01-21T23:15:25.551+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate successful policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.331526135Z level=info msg="Executing migration" id="increase max description length to 2048" kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) policy-pap | [2024-01-21T23:15:25.551+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 has no more requests grafana | logger=migrator t=2024-01-21T23:14:33.331612276Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=88.031µs policy-pap | [2024-01-21T23:15:29.952+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.3352586Z level=info msg="Executing migration" id="alter library_element model to mediumtext" policy-pap | [2024-01-21T23:15:29.960+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.335396181Z level=info msg="Migration successfully 
executed" id="alter library_element model to mediumtext" duration=138.351µs policy-pap | [2024-01-21T23:15:30.395+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.340869493Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" policy-pap | [2024-01-21T23:15:30.988+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup policy-db-migrator | kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.34161139Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=742.167µs policy-pap | [2024-01-21T23:15:30.989+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup policy-db-migrator | kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.346580416Z level=info msg="Executing migration" id="create data_keys table" policy-pap | [2024-01-21T23:15:31.524+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group testGroup policy-db-migrator | > upgrade 0120-audit_sequence.sql kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) grafana | 
logger=migrator t=2024-01-21T23:14:33.34803764Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.456794ms policy-pap | [2024-01-21T23:15:31.738+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy onap.restart.tca 1.0.0 policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.354560001Z level=info msg="Executing migration" id="create secrets table" policy-pap | [2024-01-21T23:15:31.853+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.355628391Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.0678ms policy-pap | [2024-01-21T23:15:31.853+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group testGroup policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.363746807Z level=info msg="Executing migration" id="rename data_keys name column to id" policy-pap | [2024-01-21T23:15:31.854+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group testGroup 
policy-db-migrator | kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.414091749Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=50.342592ms policy-pap | [2024-01-21T23:15:31.869+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-01-21T23:15:31Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-01-21T23:15:31Z, user=policyadmin)] policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.417412151Z level=info msg="Executing migration" id="add name column into data_keys" policy-pap | [2024-01-21T23:15:32.623+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.424591788Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.178228ms policy-pap | [2024-01-21T23:15:32.624+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove 
policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.42907785Z level=info msg="Executing migration" id="copy data_keys id column values into name" policy-pap | [2024-01-21T23:15:32.624+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-db-migrator | kafka | [2024-01-21 23:15:04,795] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) grafana | logger=migrator t=2024-01-21T23:14:33.429228811Z 
level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=148.611µs policy-pap | [2024-01-21T23:15:32.624+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup policy-db-migrator | kafka | [2024-01-21 23:15:04,795] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.435933544Z level=info msg="Executing migration" id="rename data_keys name column to label" policy-pap | [2024-01-21T23:15:32.624+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup policy-db-migrator | > upgrade 0130-statistics_sequence.sql kafka | [2024-01-21 23:15:04,800] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:33.483307809Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=47.375145ms policy-pap | [2024-01-21T23:15:32.634+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-21T23:15:32Z, user=policyadmin)] policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,801] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:33.48878118Z level=info msg="Executing migration" id="rename data_keys id column back to name" policy-pap | [2024-01-21T23:15:33.000+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup policy-db-migrator | 
CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-01-21 23:15:04,802] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.534040514Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=45.258764ms policy-pap | [2024-01-21T23:15:33.000+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,802] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.537215174Z level=info msg="Executing migration" id="create kv_store table v1" policy-pap | [2024-01-21T23:15:33.000+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-db-migrator | kafka | [2024-01-21 23:15:04,802] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.537846Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=630.836µs policy-pap | [2024-01-21T23:15:33.001+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 policy-db-migrator | -------------- kafka | [2024-01-21 23:15:04,815] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:33.541569495Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" policy-pap | [2024-01-21T23:15:33.001+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-01-21 23:15:04,816] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:33.542745306Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.173441ms policy-pap | [2024-01-21T23:15:33.001+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.547693503Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" kafka | [2024-01-21 23:15:04,816] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) policy-pap | [2024-01-21T23:15:33.012+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to 
database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-21T23:15:33Z, user=policyadmin)] policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.548015866Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=323.923µs kafka | [2024-01-21 23:15:04,816] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-21T23:15:53.594+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.551909292Z level=info msg="Executing migration" id="create permission table" kafka | [2024-01-21 23:15:04,816] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-01-21T23:15:53.595+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup policy-db-migrator | TRUNCATE TABLE sequence grafana | logger=migrator t=2024-01-21T23:14:33.55274554Z level=info msg="Migration successfully executed" id="create permission table" duration=836.118µs kafka | [2024-01-21 23:15:04,825] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-21T23:15:55.389+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=63abcfac-b36b-46ca-b5a5-4a747a0bd5bc, expireMs=1705878955389] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.562534962Z level=info msg="Executing migration" id="add unique index permission.role_id" kafka | [2024-01-21 23:15:04,826] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-21T23:15:55.496+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=a10cd6bc-dc68-4d18-bc08-45c43b208d80, expireMs=1705878955496] policy-db-migrator | kafka | [2024-01-21 23:15:04,826] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.564257508Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.722086ms policy-db-migrator | kafka | [2024-01-21 23:15:04,826] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0100-pdpstatistics.sql grafana | logger=migrator 
t=2024-01-21T23:14:33.570165823Z level=info msg="Executing migration" id="add unique index role_id_action_scope" kafka | [2024-01-21 23:15:04,826] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.571329564Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.163641ms kafka | [2024-01-21 23:15:04,838] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics grafana | logger=migrator t=2024-01-21T23:14:33.574594345Z level=info msg="Executing migration" id="create role table" kafka | [2024-01-21 23:15:04,839] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.575436823Z level=info msg="Migration successfully executed" id="create role table" duration=839.888µs kafka | [2024-01-21 23:15:04,839] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.581973124Z level=info msg="Executing migration" id="add column display_name" kafka | [2024-01-21 23:15:04,840] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- 
grafana | logger=migrator t=2024-01-21T23:14:33.589948969Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.975635ms kafka | [2024-01-21 23:15:04,840] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | DROP TABLE pdpstatistics grafana | logger=migrator t=2024-01-21T23:14:33.597336268Z level=info msg="Executing migration" id="add column group_name" kafka | [2024-01-21 23:15:04,848] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.604409734Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.073446ms kafka | [2024-01-21 23:15:04,849] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.607650955Z level=info msg="Executing migration" id="add index role.org_id" kafka | [2024-01-21 23:15:04,849] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.608441112Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=789.487µs kafka | [2024-01-21 23:15:04,849] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 
0110-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-01-21T23:14:33.611737013Z level=info msg="Executing migration" id="add unique index role_org_id_name" kafka | [2024-01-21 23:15:04,849] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.612920854Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.183221ms kafka | [2024-01-21 23:15:04,862] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats grafana | logger=migrator t=2024-01-21T23:14:33.618536367Z level=info msg="Executing migration" id="add index role_org_id_uid" kafka | [2024-01-21 23:15:04,863] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.619739218Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.202041ms kafka | [2024-01-21 23:15:04,863] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.623708985Z level=info msg="Executing migration" id="create team role table" kafka | [2024-01-21 23:15:04,864] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high 
watermark 0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.624531493Z level=info msg="Migration successfully executed" id="create team role table" duration=820.538µs kafka | [2024-01-21 23:15:04,864] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | > upgrade 0120-statistics_sequence.sql grafana | logger=migrator t=2024-01-21T23:14:33.628349909Z level=info msg="Executing migration" id="add index team_role.org_id" kafka | [2024-01-21 23:15:04,875] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.629692522Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.341553ms kafka | [2024-01-21 23:15:04,876] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | DROP TABLE statistics_sequence grafana | logger=migrator t=2024-01-21T23:14:33.635261664Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" kafka | [2024-01-21 23:15:04,876] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-21T23:14:33.637210642Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.948298ms kafka | [2024-01-21 23:15:04,876] 
INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-01-21T23:14:33.64340395Z level=info msg="Executing migration" id="add index team_role.team_id" kafka | [2024-01-21 23:15:04,877] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | policyadmin: OK: upgrade (1300) grafana | logger=migrator t=2024-01-21T23:14:33.644577321Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.174321ms kafka | [2024-01-21 23:15:04,886] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | name version grafana | logger=migrator t=2024-01-21T23:14:33.648426657Z level=info msg="Executing migration" id="create user role table" kafka | [2024-01-21 23:15:04,887] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policyadmin 1300 grafana | logger=migrator t=2024-01-21T23:14:33.649403097Z level=info msg="Migration successfully executed" id="create user role table" duration=975.709µs kafka | [2024-01-21 23:15:04,887] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) policy-db-migrator | ID script operation from_version to_version tag success atTime grafana | logger=migrator t=2024-01-21T23:14:33.654214212Z level=info msg="Executing migration" id="add 
index user_role.org_id" kafka | [2024-01-21 23:15:04,888] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 grafana | logger=migrator t=2024-01-21T23:14:33.655506274Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.291333ms kafka | [2024-01-21 23:15:04,888] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 grafana | logger=migrator t=2024-01-21T23:14:33.65935692Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" kafka | [2024-01-21 23:15:04,896] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 grafana | logger=migrator t=2024-01-21T23:14:33.66146735Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.10797ms kafka | [2024-01-21 23:15:04,897] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 grafana | logger=migrator t=2024-01-21T23:14:33.666999861Z level=info msg="Executing 
migration" id="add index user_role.user_id" policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,897] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.668295664Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.294623ms policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,897] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.676868804Z level=info msg="Executing migration" id="create builtin role table" policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,897] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.678148256Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.279082ms policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,906] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:33.683829369Z level=info msg="Executing migration" id="add index builtin_role.role_id" policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,907] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:33.685612006Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.781817ms policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,907] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.689651734Z level=info msg="Executing migration" id="add index builtin_role.name" policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,907] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.691442421Z level=info 
msg="Migration successfully executed" id="add index builtin_role.name" duration=1.789527ms policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,908] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.695391318Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,913] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:33.703421063Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.028835ms policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,914] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:33.709810143Z level=info msg="Executing migration" id="add index builtin_role.org_id" policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,914] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) grafana | logger=migrator 
t=2024-01-21T23:14:33.711317087Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.506564ms policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,914] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.718491644Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,914] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.720276321Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.784077ms policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,925] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:33.723971426Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,925] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 
(kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:33.725668552Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.696586ms policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,925] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.731693848Z level=info msg="Executing migration" id="add unique index role.uid" policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32 kafka | [2024-01-21 23:15:04,925] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.732836999Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.143021ms policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,928] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.736489483Z level=info msg="Executing migration" id="create seed assignment table" policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,941] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:33.737665614Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.174301ms policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,942] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:33.742488539Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,942] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.744487958Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.999179ms policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,942] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.752364432Z level=info msg="Executing 
migration" id="add column hidden to role table" policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,942] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.76386236Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.494608ms policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,952] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:33.767552155Z level=info msg="Executing migration" id="permission kind migration" policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,953] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:33.774940994Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.387899ms policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,954] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:04,954] INFO [Partition __consumer_offsets-9 broker=1] 
Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 grafana | logger=migrator t=2024-01-21T23:14:33.77989083Z level=info msg="Executing migration" id="permission attribute migration" kafka | [2024-01-21 23:15:04,954] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 grafana | logger=migrator t=2024-01-21T23:14:33.787751874Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.859994ms kafka | [2024-01-21 23:15:04,962] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 grafana | logger=migrator t=2024-01-21T23:14:33.795438866Z level=info msg="Executing migration" id="permission identifier migration" kafka | [2024-01-21 23:15:04,963] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 grafana | logger=migrator t=2024-01-21T23:14:33.803170489Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.731053ms kafka | [2024-01-21 23:15:04,963] INFO [Partition 
__consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 grafana | logger=migrator t=2024-01-21T23:14:33.806700872Z level=info msg="Executing migration" id="add permission identifier index" kafka | [2024-01-21 23:15:04,963] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 grafana | logger=migrator t=2024-01-21T23:14:33.807509419Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=808.027µs kafka | [2024-01-21 23:15:04,963] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 grafana | logger=migrator t=2024-01-21T23:14:33.810895531Z level=info msg="Executing migration" id="create query_history table v1" kafka | [2024-01-21 23:15:04,974] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 grafana | logger=migrator t=2024-01-21T23:14:33.811546667Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=650.896µs kafka | [2024-01-21 23:15:04,975] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 grafana | logger=migrator t=2024-01-21T23:14:33.820203989Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" kafka | [2024-01-21 23:15:04,975] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 grafana | logger=migrator t=2024-01-21T23:14:33.822692972Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.491534ms policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,975] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) grafana | 
logger=migrator t=2024-01-21T23:14:33.827754889Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" kafka | [2024-01-21 23:15:04,975] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.828101742Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=347.863µs policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,986] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:33.833926987Z level=info msg="Executing migration" id="rbac disabled migrator" policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,987] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:33.834014598Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=88.421µs policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,988] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) grafana | logger=migrator 
t=2024-01-21T23:14:33.84168023Z level=info msg="Executing migration" id="teams permissions migration" policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,988] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.842678459Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=998.089µs policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,988] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.847258392Z level=info msg="Executing migration" id="dashboard permissions" policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33 kafka | [2024-01-21 23:15:04,995] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:33.848245682Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=988.04µs policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34 kafka | [2024-01-21 23:15:04,996] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:33.852301979Z level=info 
msg="Executing migration" id="dashboard permissions uid scopes" policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34 kafka | [2024-01-21 23:15:04,996] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.85339098Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.088901ms policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34 kafka | [2024-01-21 23:15:04,996] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.860091793Z level=info msg="Executing migration" id="drop managed folder create actions" policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34 kafka | [2024-01-21 23:15:04,996] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:33.860635938Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=547.795µs policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34 kafka | [2024-01-21 23:15:05,008] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:33.86517873Z level=info msg="Executing migration" id="alerting notification permissions" policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34 kafka | [2024-01-21 23:15:05,009] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:33.865563664Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=384.984µs policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34 kafka | [2024-01-21 23:15:05,009] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.872162996Z level=info msg="Executing migration" id="create query_history_star table v1" policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34 kafka | [2024-01-21 23:15:05,009] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:33.873556859Z level=info msg="Migration successfully executed" id="create 
query_history_star table v1" duration=1.391763ms
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,009] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.88321892Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,016] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-21T23:14:33.884996426Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.776876ms
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,017] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-21T23:14:33.890103214Z level=info msg="Executing migration" id="add column org_id in query_history_star"
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,017] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:33.898484643Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.372789ms
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,017] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:33.901779114Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,018] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.901920195Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=140.431µs
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,025] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-21T23:14:33.905099795Z level=info msg="Executing migration" id="create correlation table v1"
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,026] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-21T23:14:33.905982473Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=882.138µs
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,026] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:33.910368434Z level=info msg="Executing migration" id="add index correlations.uid"
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,026] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:33.911490265Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.121401ms
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,026] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.917592292Z level=info msg="Executing migration" id="add index correlations.source_uid"
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,032] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-21T23:14:33.920510019Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.919087ms
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,033] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-21T23:14:33.924375306Z level=info msg="Executing migration" id="add correlation config column"
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,033] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:33.933746703Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.370667ms
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,033] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:33.939890241Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,033] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:33.941852879Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.962898ms
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,040] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-21T23:14:33.945941508Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
kafka | [2024-01-21 23:15:05,040] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-21T23:14:33.947611834Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.666276ms
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
kafka | [2024-01-21 23:15:05,040] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:33.953111435Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:33.98353201Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=30.419905ms
kafka | [2024-01-21 23:15:05,041] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:33.988318245Z level=info msg="Executing migration" id="create correlation v2"
kafka | [2024-01-21 23:15:05,041] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:33.989019692Z level=info msg="Migration successfully executed" id="create correlation v2" duration=701.247µs
kafka | [2024-01-21 23:15:05,047] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:33.992942289Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
kafka | [2024-01-21 23:15:05,048] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:33.99413138Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.188261ms
kafka | [2024-01-21 23:15:05,048] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:33.999381639Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
kafka | [2024-01-21 23:15:05,048] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.001222766Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.840767ms
kafka | [2024-01-21 23:15:05,048] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.005654104Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
kafka | [2024-01-21 23:15:05,055] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.00693246Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.278746ms
kafka | [2024-01-21 23:15:05,056] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.012251782Z level=info msg="Executing migration" id="copy correlation v1 to v2"
kafka | [2024-01-21 23:15:05,056] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.012544646Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=289.664µs
kafka | [2024-01-21 23:15:05,056] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.01703182Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
kafka | [2024-01-21 23:15:05,057] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.018258474Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.226344ms
kafka | [2024-01-21 23:15:05,063] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.021684154Z level=info msg="Executing migration" id="add provisioning column"
kafka | [2024-01-21 23:15:05,064] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.032256158Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.573284ms
kafka | [2024-01-21 23:15:05,064] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.036901712Z level=info msg="Executing migration" id="create entity_events table"
kafka | [2024-01-21 23:15:05,064] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.037446429Z level=info msg="Migration successfully executed" id="create entity_events table" duration=543.697µs
kafka | [2024-01-21 23:15:05,064] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
grafana | logger=migrator t=2024-01-21T23:14:34.04185266Z level=info msg="Executing migration" id="create dashboard public config v1"
kafka | [2024-01-21 23:15:05,071] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.042764471Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=911.031µs
kafka | [2024-01-21 23:15:05,072] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.046349763Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
kafka | [2024-01-21 23:15:05,072] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.047049341Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
kafka | [2024-01-21 23:15:05,072] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.050983347Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-01-21 23:15:05,072] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.051715495Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-01-21 23:15:05,078] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.056121947Z level=info msg="Executing migration" id="Drop old dashboard public config table"
kafka | [2024-01-21 23:15:05,079] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.056889876Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=767.919µs
kafka | [2024-01-21 23:15:05,079] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.060852922Z level=info msg="Executing migration" id="recreate dashboard public config v1"
kafka | [2024-01-21 23:15:05,079] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.061750542Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=896.85µs
kafka | [2024-01-21 23:15:05,079] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.065875821Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
kafka | [2024-01-21 23:15:05,086] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.066959713Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.083092ms
kafka | [2024-01-21 23:15:05,087] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.073290207Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-01-21 23:15:05,087] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.075038088Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.746541ms
kafka | [2024-01-21 23:15:05,087] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.08375729Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
kafka | [2024-01-21 23:15:05,087] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.084820262Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.062962ms
kafka | [2024-01-21 23:15:05,093] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.088659757Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
kafka | [2024-01-21 23:15:05,094] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.090318306Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.658489ms
kafka | [2024-01-21 23:15:05,094] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:36
grafana | logger=migrator t=2024-01-21T23:14:34.094812799Z level=info msg="Executing migration" id="Drop public config table"
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:36
kafka | [2024-01-21 23:15:05,094] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:34.096090514Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.274105ms
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:36
kafka | [2024-01-21 23:15:05,094] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:34.100081091Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:36
kafka | [2024-01-21 23:15:05,100] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-21T23:14:34.101085423Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.003622ms
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,100] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-21T23:14:34.105830899Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,101] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:34.107523659Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.69074ms
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,101] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:34.113407748Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,101] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:34.115170239Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.762331ms
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,108] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-21T23:14:34.119992766Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2101242314321100u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,108] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-21T23:14:34.121146999Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.156763ms
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2101242314321200u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,109] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:34.125798294Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2101242314321200u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,109] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:34.157730401Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=31.930907ms
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2101242314321200u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,109] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:34.1611015Z level=info msg="Executing migration" id="add annotations_enabled column"
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2101242314321200u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,115] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-21T23:14:34.169288537Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.182917ms
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2101242314321300u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,116] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-21T23:14:34.174346006Z level=info msg="Executing migration" id="add time_selection_enabled column"
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2101242314321300u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,116] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:34.181890845Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.542029ms
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2101242314321300u 1 2024-01-21 23:14:37
kafka | [2024-01-21 23:15:05,116] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:34.186764713Z level=info msg="Executing migration" id="delete orphaned public dashboards"
policy-db-migrator | policyadmin: OK @ 1300
kafka | [2024-01-21 23:15:05,116] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:34.187176208Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=411.925µs
kafka | [2024-01-21 23:15:05,128] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-21T23:14:34.192254348Z level=info msg="Executing migration" id="add share column"
kafka | [2024-01-21 23:15:05,129] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-21T23:14:34.204708855Z level=info msg="Migration successfully executed" id="add share column" duration=12.461137ms
kafka | [2024-01-21 23:15:05,129] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:34.211742757Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
kafka | [2024-01-21 23:15:05,129] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:34.211943699Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=199.942µs
kafka | [2024-01-21 23:15:05,129] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-21 23:15:05,137] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-21 23:15:05,138] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-21T23:14:34.216415102Z level=info msg="Executing migration" id="create file table"
kafka | [2024-01-21 23:15:05,138] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-21T23:14:34.217275582Z level=info msg="Migration successfully executed" id="create file table" duration=859.85µs
kafka | [2024-01-21 23:15:05,138] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
grafana |
logger=migrator t=2024-01-21T23:14:34.221190229Z level=info msg="Executing migration" id="file table idx: path natural pk" kafka | [2024-01-21 23:15:05,138] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:34.222952369Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.76126ms kafka | [2024-01-21 23:15:05,151] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:34.226929556Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" kafka | [2024-01-21 23:15:05,152] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:34.22893039Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.001314ms kafka | [2024-01-21 23:15:05,152] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:34.233866508Z level=info msg="Executing migration" id="create file_meta table" kafka | [2024-01-21 23:15:05,152] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:34.234970591Z level=info msg="Migration successfully 
executed" id="create file_meta table" duration=1.104083ms kafka | [2024-01-21 23:15:05,152] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-21T23:14:34.239262832Z level=info msg="Executing migration" id="file table idx: path key" kafka | [2024-01-21 23:15:05,160] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-21T23:14:34.242287847Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=3.025006ms kafka | [2024-01-21 23:15:05,160] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-21T23:14:34.254467611Z level=info msg="Executing migration" id="set path collation in file table" kafka | [2024-01-21 23:15:05,160] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:34.254548872Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=77.641µs kafka | [2024-01-21 23:15:05,161] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-21T23:14:34.259720403Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2024-01-21T23:14:34.259833734Z level=info 
msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=114.301µs grafana | logger=migrator t=2024-01-21T23:14:34.264149565Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2024-01-21T23:14:34.265041706Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=892.341µs grafana | logger=migrator t=2024-01-21T23:14:34.26879684Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2024-01-21T23:14:34.269136194Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=339.004µs grafana | logger=migrator t=2024-01-21T23:14:34.273460075Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2024-01-21T23:14:34.275075504Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.613659ms grafana | logger=migrator t=2024-01-21T23:14:34.280599629Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2024-01-21T23:14:34.290818179Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.21923ms grafana | logger=migrator t=2024-01-21T23:14:34.294913628Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2024-01-21T23:14:34.295358183Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=448.475µs grafana | logger=migrator t=2024-01-21T23:14:34.300004238Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2024-01-21T23:14:34.302167133Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.162475ms grafana | logger=migrator t=2024-01-21T23:14:34.305813246Z level=info msg="Executing 
migration" id="update group index for alert rules" grafana | logger=migrator t=2024-01-21T23:14:34.306456103Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=648.157µs grafana | logger=migrator t=2024-01-21T23:14:34.309804063Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2024-01-21T23:14:34.310131267Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=326.434µs grafana | logger=migrator t=2024-01-21T23:14:34.315905295Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2024-01-21T23:14:34.316523012Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=617.797µs grafana | logger=migrator t=2024-01-21T23:14:34.321618962Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2024-01-21T23:14:34.332995237Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.376375ms grafana | logger=migrator t=2024-01-21T23:14:34.336193084Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2024-01-21T23:14:34.34520625Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.007876ms grafana | logger=migrator t=2024-01-21T23:14:34.348703002Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2024-01-21T23:14:34.349490081Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=784.229µs grafana | logger=migrator t=2024-01-21T23:14:34.354357248Z level=info msg="Executing migration" id="update seed_assignment role_name 
column to nullable" grafana | logger=migrator t=2024-01-21T23:14:34.466060185Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=111.690297ms grafana | logger=migrator t=2024-01-21T23:14:34.469586806Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2024-01-21T23:14:34.470578728Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=991.082µs grafana | logger=migrator t=2024-01-21T23:14:34.475077221Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2024-01-21T23:14:34.476889692Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.811301ms grafana | logger=migrator t=2024-01-21T23:14:34.482182855Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2024-01-21T23:14:34.520465186Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=38.283031ms grafana | logger=migrator t=2024-01-21T23:14:34.529593924Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2024-01-21T23:14:34.529804386Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=210.202µs grafana | logger=migrator t=2024-01-21T23:14:34.533321838Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2024-01-21T23:14:34.533670372Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=348.704µs grafana | logger=migrator t=2024-01-21T23:14:34.537547057Z level=info msg="Executing migration" id="migrate external alertmanagers to 
datsourcse" kafka | [2024-01-21 23:15:05,161] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) kafka | [2024-01-21 23:15:05,168] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,169] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,169] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,169] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,169] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,179] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,179] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,180] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,180] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,180] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,189] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,190] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,190] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,190] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,190] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,196] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,197] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,197] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,197] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,197] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,202] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,203] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,203] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,203] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,205] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,213] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,214] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,214] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,214] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,214] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,220] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,220] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,221] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,221] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,221] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,229] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,230] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,230] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,230] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,230] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,237] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,237] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,237] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,237] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,237] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,244] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,244] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,244] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,244] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,245] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,255] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,256] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,256] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,256] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,256] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,268] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,268] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,269] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,269] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,269] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-21 23:15:05,277] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-21 23:15:05,278] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-21 23:15:05,278] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,278] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-21 23:15:05,278] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger)
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
grafana | logger=migrator t=2024-01-21T23:14:34.537882781Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=335.454µs
grafana | logger=migrator t=2024-01-21T23:14:34.542765739Z level=info msg="Executing migration" id="create folder table"
grafana | logger=migrator t=2024-01-21T23:14:34.544191496Z level=info msg="Migration successfully executed" id="create folder table" duration=1.425097ms
grafana | logger=migrator t=2024-01-21T23:14:34.548411216Z level=info msg="Executing migration" id="Add index for parent_uid"
grafana | logger=migrator t=2024-01-21T23:14:34.550267817Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.855891ms
grafana | logger=migrator t=2024-01-21T23:14:34.554494067Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
grafana | logger=migrator t=2024-01-21T23:14:34.55638255Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.887693ms
grafana | logger=migrator t=2024-01-21T23:14:34.561072685Z level=info msg="Executing migration" id="Update folder title length"
grafana | logger=migrator t=2024-01-21T23:14:34.561103485Z level=info msg="Migration successfully executed" id="Update folder title length" duration=34.17µs
grafana | logger=migrator t=2024-01-21T23:14:34.565886642Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
grafana | logger=migrator t=2024-01-21T23:14:34.567867925Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.980353ms
grafana | logger=migrator t=2024-01-21T23:14:34.572784363Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
grafana | logger=migrator t=2024-01-21T23:14:34.574530733Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.74604ms
grafana | logger=migrator t=2024-01-21T23:14:34.579390131Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
grafana | logger=migrator t=2024-01-21T23:14:34.580583185Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.192154ms
grafana | logger=migrator t=2024-01-21T23:14:34.585415912Z level=info msg="Executing migration" id="create anon_device table"
grafana | logger=migrator t=2024-01-21T23:14:34.586783008Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.366946ms
grafana | logger=migrator t=2024-01-21T23:14:34.590775705Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
grafana | logger=migrator t=2024-01-21T23:14:34.592261963Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.486568ms
grafana | logger=migrator t=2024-01-21T23:14:34.596920897Z level=info msg="Executing migration" id="add index anon_device.updated_at"
grafana | logger=migrator t=2024-01-21T23:14:34.598076171Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.154714ms
grafana | logger=migrator t=2024-01-21T23:14:34.604732159Z level=info msg="Executing migration" id="create signing_key table"
grafana | logger=migrator t=2024-01-21T23:14:34.605547829Z level=info msg="Migration successfully executed" id="create signing_key table" duration=814.35µs
grafana | logger=migrator t=2024-01-21T23:14:34.615324554Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
grafana | logger=migrator t=2024-01-21T23:14:34.617202336Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.876882ms
grafana | logger=migrator t=2024-01-21T23:14:34.621467927Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
grafana | logger=migrator t=2024-01-21T23:14:34.623363049Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.894522ms
grafana | logger=migrator t=2024-01-21T23:14:34.627305146Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
grafana | logger=migrator t=2024-01-21T23:14:34.62763114Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=309.973µs
grafana | logger=migrator t=2024-01-21T23:14:34.631682797Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
grafana | logger=migrator t=2024-01-21T23:14:34.641192239Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.508362ms
grafana | logger=migrator t=2024-01-21T23:14:34.648617096Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
grafana | logger=migrator t=2024-01-21T23:14:34.649166622Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=551.226µs
grafana | logger=migrator t=2024-01-21T23:14:34.657816024Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2024-01-21T23:14:34.659747347Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.930903ms
grafana | logger=migrator t=2024-01-21T23:14:34.66341234Z level=info msg="Executing migration" id="create sso_setting table"
grafana | logger=migrator t=2024-01-21T23:14:34.664365612Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=952.012µs
grafana | logger=migrator t=2024-01-21T23:14:34.670782397Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
grafana | logger=migrator t=2024-01-21T23:14:34.672034362Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.252745ms
grafana | logger=migrator t=2024-01-21T23:14:34.676811158Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
grafana | logger=migrator t=2024-01-21T23:14:34.677291564Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=481.166µs
grafana | logger=migrator t=2024-01-21T23:14:34.68118406Z level=info msg="migrations completed" performed=523 skipped=0 duration=3.941321669s
grafana | logger=sqlstore t=2024-01-21T23:14:34.690171066Z level=info msg="Created default admin" user=admin
grafana | logger=sqlstore t=2024-01-21T23:14:34.690485769Z level=info msg="Created default organization"
grafana | logger=secrets t=2024-01-21T23:14:34.698287981Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
grafana | logger=plugin.store t=2024-01-21T23:14:34.721728068Z level=info msg="Loading plugins..."
grafana | logger=local.finder t=2024-01-21T23:14:34.759352701Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
grafana | logger=plugin.store t=2024-01-21T23:14:34.759419502Z level=info msg="Plugins loaded" count=55 duration=37.693484ms
grafana | logger=query_data t=2024-01-21T23:14:34.762975834Z level=info msg="Query Service initialization"
grafana | logger=live.push_http t=2024-01-21T23:14:34.771582425Z level=info msg="Live Push Gateway initialization"
grafana | logger=ngalert.migration t=2024-01-21T23:14:34.77881233Z level=info msg=Starting
grafana | logger=ngalert.migration orgID=1 t=2024-01-21T23:14:34.779773602Z level=info msg="Migrating alerts for organisation"
grafana | logger=ngalert.migration orgID=1 t=2024-01-21T23:14:34.780207507Z level=info msg="Alerts found to migrate" alerts=0
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
grafana | logger=ngalert.migration orgID=1 t=2024-01-21T23:14:34.780663332Z level=warn msg="No available receivers"
grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-01-21T23:14:34.783526346Z level=info msg="Completed legacy migration"
grafana | logger=infra.usagestats.collector t=2024-01-21T23:14:34.813050694Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
grafana | logger=provisioning.datasources t=2024-01-21T23:14:34.815672725Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
grafana | logger=provisioning.alerting t=2024-01-21T23:14:34.831145627Z level=info msg="starting to provision alerting"
grafana | logger=provisioning.alerting t=2024-01-21T23:14:34.831171237Z level=info msg="finished to provision alerting"
grafana | logger=grafanaStorageLogger t=2024-01-21T23:14:34.83142378Z level=info msg="Storage starting"
grafana | logger=ngalert.state.manager t=2024-01-21T23:14:34.832702315Z level=info msg="Warming state cache for startup"
grafana | logger=ngalert.state.manager t=2024-01-21T23:14:34.83305585Z level=info msg="State cache has been initialized" states=0 duration=383.805µs
grafana | logger=ngalert.scheduler t=2024-01-21T23:14:34.83309153Z level=info msg="Starting scheduler" tickInterval=10s
grafana | logger=ticker t=2024-01-21T23:14:34.83313254Z level=info msg=starting first_tick=2024-01-21T23:14:40Z
grafana | logger=ngalert.multiorg.alertmanager t=2024-01-21T23:14:34.833147941Z level=info msg="Starting MultiOrg Alertmanager"
grafana | logger=http.server t=2024-01-21T23:14:34.836576961Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
grafana | logger=plugins.update.checker t=2024-01-21T23:14:34.937647182Z level=info msg="Update check succeeded" duration=104.455431ms
grafana | logger=sqlstore.transactions t=2024-01-21T23:14:34.964382327Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=sqlstore.transactions t=2024-01-21T23:14:34.975559949Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=grafana.update.checker t=2024-01-21T23:14:36.361700661Z level=info msg="Update check succeeded" duration=1.529970587s
grafana | logger=infra.usagestats t=2024-01-21T23:15:33.847169346Z level=info msg="Usage stats are ready to report"
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-01-21 23:15:05,287] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,288] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,290] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,293] INFO [Broker id=1] Finished LeaderAndIsr request in 521ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
kafka | [2024-01-21 23:15:05,295] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=d61QdiLrRDGfXeRddxpvYw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-01-21 23:15:05,301] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,304] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,304] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,304] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 13 milliseconds for epoch 0, of which 13 milliseconds 
was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,306] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,306] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,306] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,308] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,308] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,308] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,309] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,309] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,311] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,311] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,311] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,311] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,311] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,311] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-21 23:15:05,311] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-01-21 23:15:05,313] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,313] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,313] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 22 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,314] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,314] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,314] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,314] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,314] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,315] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-21 23:15:05,315] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,315] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,324] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-21 23:15:05,326] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-21 23:15:05,413] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 0096ba3d-86d0-4a50-8361-ec89b03a0194 in Empty state. Created a new member id consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,413] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,430] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,433] INFO [GroupCoordinator 1]: Preparing to rebalance group 0096ba3d-86d0-4a50-8361-ec89b03a0194 in state PreparingRebalance with old generation 0 (__consumer_offsets-42) (reason: Adding new member consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,663] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group e43a1262-c2bd-4185-8b6c-0623a45ad046 in Empty state. Created a new member id consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:05,666] INFO [GroupCoordinator 1]: Preparing to rebalance group e43a1262-c2bd-4185-8b6c-0623a45ad046 in state PreparingRebalance with old generation 0 (__consumer_offsets-44) (reason: Adding new member consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:08,442] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:08,447] INFO [GroupCoordinator 1]: Stabilized group 0096ba3d-86d0-4a50-8361-ec89b03a0194 generation 1 (__consumer_offsets-42) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:08,463] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:08,467] INFO [GroupCoordinator 1]: Assignment received from leader consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd for group 0096ba3d-86d0-4a50-8361-ec89b03a0194 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:08,667] INFO [GroupCoordinator 1]: Stabilized group e43a1262-c2bd-4185-8b6c-0623a45ad046 generation 1 (__consumer_offsets-44) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-21 23:15:08,684] INFO [GroupCoordinator 1]: Assignment received from leader consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b for group e43a1262-c2bd-4185-8b6c-0623a45ad046 for generation 1.
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping grafana ...
Stopping policy-api ...
Stopping kafka ...
Stopping prometheus ...
Stopping compose_zookeeper_1 ...
Stopping simulator ...
Stopping mariadb ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing grafana ...
Removing policy-api ...
Removing kafka ...
Removing policy-db-migrator ...
Removing prometheus ...
Removing compose_zookeeper_1 ...
Removing simulator ...
Removing mariadb ...
Removing compose_zookeeper_1 ... done
Removing kafka ... done
Removing grafana ... done
Removing prometheus ... done
Removing policy-api ... done
Removing simulator ... done
Removing policy-apex-pdp ... done
Removing mariadb ... done
Removing policy-db-migrator ... done
Removing policy-pap ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.C9xkkUvsOC ]]
+ rsync -av /tmp/tmp.C9xkkUvsOC/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 910,600 bytes  received 95 bytes  607,130.00 bytes/sec
total size is 910,059  speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 0
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2083 killed;
[ssh-agent] Stopped.
Robot results publisher started...
-Parsing output xml:
Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11468459741754327868.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4735386884158819356.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17078034781118754770.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Zh8L from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-Zh8L/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10658688192205416812.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config15755228189959814618tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins567138907153378295.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10033323700225331421.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Zh8L from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-Zh8L/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4580762017676075078.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12289679185341084929.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Zh8L from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-Zh8L/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins4142273024307142563.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Zh8L from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-Zh8L/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1544
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-14039 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   15G  141G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001
---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         818       24663           0        6684       30892
Swap:          1023           0        1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:55:91:a0 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.9/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85921sec preferred_lft 85921sec
    inet6 fe80::f816:3eff:fe55:91a0/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:59:2f:99:71 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14039)  01/21/24  _x86_64_  (8 CPU)

23:10:21     LINUX RESTART  (8 CPU)

23:11:02          tps      rtps      wtps   bread/s   bwrtn/s
23:12:01       115.44     17.88     97.56   1037.65  50308.90
23:13:01       147.16     23.26    123.90   2804.73  54130.98
23:14:01       186.35      0.20    186.15     23.86 117967.94
23:15:01       357.12     11.71    345.41    785.20  80471.02
23:16:01        16.91      0.28     16.63     13.20    429.36
23:17:01         4.63      0.10      4.53     12.66    130.16
23:18:01        74.69      1.40     73.29    107.45   4186.74
Average:       128.93      7.81    121.12    682.68  43930.22

23:11:02    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
23:12:01     30063060  31724500   2876160      8.73     74800   1892312   1447364      4.26    857748   1719516    223148
23:13:01     29482320  31729156   3456900     10.49     90392   2444792   1357784      3.99    935248   2186248    321440
23:14:01     25780524  31696004   7158696     21.73    137700   5914560   1396360      4.11    988228   5651772   1406332
23:15:01     23211448  29754844   9727772     29.53    157268   6479816   8515768     25.06   3093848   6020320       464
23:16:01     22858032  29406820  10081188     30.61    158644   6481624   8988128     26.45   3464268   5997728       300
23:17:01     22847688  29425920  10091532     30.64    158856   6509772   8806316     25.91   3455484   6013920     27056
23:18:01     25266780  31638164   7672440     23.29    161716   6320184   1532576      4.51   1264860   5853556      2088
Average:     25644265  30767915   7294955     22.15    134197   5149009   4577757     13.47   2008526   4777580    282975

23:11:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:12:01         ens3     64.51     42.87    978.90      7.69      0.00      0.00      0.00      0.00
23:12:01           lo      1.29      1.29      0.14      0.14      0.00      0.00      0.00      0.00
23:12:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01         ens3    108.07     75.49   2345.11      9.59      0.00      0.00      0.00      0.00
23:13:01           lo      5.73      5.73      0.54      0.54      0.00      0.00      0.00      0.00
23:13:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01 br-7883eafb062c  0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01         ens3   1116.10    570.74  30113.15     41.32      0.00      0.00      0.00      0.00
23:14:01           lo      8.07      8.07      0.80      0.80      0.00      0.00      0.00      0.00
23:14:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01 br-7883eafb062c  0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:15:01         ens3     76.30     36.39   2867.95      3.01      0.00      0.00      0.00      0.00
23:15:01           lo      1.13      1.13      0.09      0.09      0.00      0.00      0.00      0.00
23:15:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:15:01  vethaaf892a      1.80      1.92      0.18      0.19      0.00      0.00      0.00      0.00
23:16:01         ens3      4.82      4.10      1.02      1.22      0.00      0.00      0.00      0.00
23:16:01           lo      5.98      5.98      3.63      3.63      0.00      0.00      0.00      0.00
23:16:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:16:01  vethaaf892a     19.66     15.64      2.26      2.37      0.00      0.00      0.00      0.00
23:17:01         ens3     19.83     17.80      7.62     17.23      0.00      0.00      0.00      0.00
23:17:01           lo      8.53      8.53      0.65      0.65      0.00      0.00      0.00      0.00
23:17:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:17:01  vethaaf892a     13.93      9.40      1.06      1.34      0.00      0.00      0.00      0.00
23:18:01         ens3     59.22     38.03     69.51     16.88      0.00      0.00      0.00      0.00
23:18:01           lo      0.47      0.47      0.05      0.05      0.00      0.00      0.00      0.00
23:18:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         ens3    207.31    112.37   5207.55     13.86      0.00      0.00      0.00      0.00
Average:           lo      4.46      4.46      0.84      0.84      0.00      0.00      0.00      0.00
Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14039)  01/21/24  _x86_64_  (8 CPU)

23:10:21     LINUX RESTART  (8 CPU)

23:11:02        CPU     %user     %nice   %system   %iowait    %steal     %idle
23:12:01        all      9.82      0.00      0.70      3.32      0.03     86.13
23:12:01          0     12.89      0.00      0.58      4.07      0.07     82.39
23:12:01          1     20.30      0.00      1.12     19.71      0.05     58.82
23:12:01          2     27.13      0.00      1.73      0.85      0.05     70.24
23:12:01          3     11.09      0.00      0.82      0.46      0.07     87.56
23:12:01          4      5.33      0.00      0.66      0.42      0.03     93.55
23:12:01          5      1.20      0.00      0.29      0.31      0.02     98.19
23:12:01          6      0.61      0.00      0.32      0.05      0.02     99.00
23:12:01          7      0.03      0.00      0.07      0.70      0.02     99.19
23:13:01        all      7.99      0.00      1.12      6.20      0.04     84.65
23:13:01          0      8.49      0.00      1.32      3.13      0.03     87.03
23:13:01          1      7.81      0.00      0.89     29.64      0.05     61.61
23:13:01          2      5.26      0.00      0.97      0.43      0.03     93.30
23:13:01          3      2.23      0.00      0.59      1.93      0.07     95.20
23:13:01          4      3.52      0.00      0.77      0.12      0.03     95.56
23:13:01          5     12.84      0.00      1.14      0.84      0.02     85.16
23:13:01          6     16.68      0.00      1.69      7.34      0.03     74.25
23:13:01          7      7.05      0.00      1.59      6.23      0.03     85.09
23:14:01        all     12.22      0.00      5.61      8.68      0.07     73.42
23:14:01          0     11.29      0.00      5.68     13.63      0.07     69.33
23:14:01          1     12.66      0.00      6.48     12.88      0.08     67.89
23:14:01          2     13.53      0.00      5.32      2.47      0.07     78.62
23:14:01          3     11.84      0.00      5.32      0.25      0.05     82.54
23:14:01          4     12.66      0.00      5.06      3.29      0.07     78.92
23:14:01          5     10.47      0.00      5.69     17.64      0.09     66.12
23:14:01          6     12.28      0.00      6.04     18.22      0.10     63.35
23:14:01          7     13.02      0.00      5.32      1.08      0.07     80.51
23:15:01        all     24.56      0.00      4.17      5.47      0.08     65.72
23:15:01          0     25.71      0.00      4.75      0.92      0.08     68.54
23:15:01          1     22.08      0.00      4.04     19.77      0.08     54.03
23:15:01          2     17.86      0.00      3.15      0.80      0.08     78.10
23:15:01          3     29.98      0.00      4.24      3.37      0.12     62.30
23:15:01          4     25.63      0.00      3.66      1.28      0.07     69.37
23:15:01          5     35.78      0.00      5.66     14.57      0.10     43.88
23:15:01          6     20.13      0.00      3.77      2.40      0.07     73.63
23:15:01          7     19.28      0.00      4.06      0.67      0.07     75.91
23:16:01        all     11.31      0.00      1.06      0.05      0.06     87.51
23:16:01          0      9.88      0.00      0.97      0.00      0.05     89.10
23:16:01          1     11.21      0.00      1.07      0.20      0.05     87.47
23:16:01          2     11.81      0.00      1.04      0.05      0.07     87.03
23:16:01          3     10.74      0.00      0.89      0.00      0.05     88.33
23:16:01          4     13.22      0.00      1.34      0.10      0.05     85.29
23:16:01          5     13.20      0.00      1.32      0.02      0.08     85.39
23:16:01          6     10.45      0.00      1.01      0.07      0.07     88.41
23:16:01          7      9.98      0.00      0.87      0.00      0.08     89.07
23:17:01        all      1.33      0.00      0.28      0.02      0.05     98.32
23:17:01          0      2.24      0.00      0.40      0.02      0.07     97.28
23:17:01          1      1.22      0.00      0.23      0.07      0.03     98.45
23:17:01          2      1.35      0.00      0.28      0.05      0.05     98.27
23:17:01          3      0.97      0.00      0.25      0.00      0.05     98.73
23:17:01          4      1.80      0.00      0.23      0.03      0.03     97.90
23:17:01          5      0.84      0.00      0.25      0.02      0.05     98.85
23:17:01          6      1.41      0.00      0.30      0.00      0.07     98.22
23:17:01          7      0.83      0.00      0.30      0.00      0.08     98.78
23:18:01        all      5.68      0.00      0.62      0.50      0.04     93.16
23:18:01          0      2.67      0.00      0.57      0.35      0.02     96.39
23:18:01          1      0.65      0.00      0.57      1.47      0.02     97.30
23:18:01          2      0.84      0.00      0.62      0.65      0.03     97.86
23:18:01          3      5.33      0.00      0.55      0.40      0.03     93.69
23:18:01          4      3.59      0.00      0.37      0.25      0.03     95.76
23:18:01          5      3.49      0.00      0.60      0.27      0.03     95.61
23:18:01          6      1.36      0.00      0.54      0.02      0.03     98.06
23:18:01          7     27.54      0.00      1.17      0.55      0.07     70.67
Average:        all     10.40      0.00      1.93      3.45      0.05     84.17
Average:          0     10.43      0.00      2.03      3.14      0.06     84.34
Average:          1     10.80      0.00      2.05     11.92      0.05     75.17
Average:          2     11.06      0.00      1.86      0.75      0.06     86.27
Average:          3     10.30      0.00      1.80      0.92      0.06     86.92
Average:          4      9.37      0.00      1.72      0.78      0.05     88.09
Average:          5     11.12      0.00      2.13      4.78      0.06     81.92
Average:          6      8.99      0.00      1.94      3.99      0.06     85.02
Average:          7     11.12      0.00      1.91      1.32      0.06     85.60