23:10:59 Started by timer
23:10:59 Running as SYSTEM
23:10:59 [EnvInject] - Loading node environment variables.
23:10:59 Building remotely on prd-ubuntu1804-docker-8c-8g-2890 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:10:59 [ssh-agent] Looking for ssh-agent implementation...
23:10:59 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:10:59 $ ssh-agent
23:10:59 SSH_AUTH_SOCK=/tmp/ssh-qWQOQHNVAGkV/agent.2116
23:10:59 SSH_AGENT_PID=2118
23:10:59 [ssh-agent] Started.
23:10:59 Running ssh-add (command line suppressed)
23:10:59 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14518375709131756146.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14518375709131756146.key)
23:10:59 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:10:59 The recommended git tool is: NONE
23:11:01 using credential onap-jenkins-ssh
23:11:01 Wiping out workspace first.
23:11:01 Cloning the remote Git repository
23:11:01 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:11:01  > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:11:01 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:11:01  > git --version # timeout=10
23:11:01  > git --version # 'git version 2.17.1'
23:11:01 using GIT_SSH to set credentials Gerrit user
23:11:01 Verifying host key using manually-configured host key entries
23:11:01  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:11:02  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:11:02  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:11:02 Avoid second fetch
23:11:02  > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:11:02 Checking out Revision fbfc234895c48282e2e92b44c8c8b49745e81745 (refs/remotes/origin/master)
23:11:02  > git config core.sparsecheckout # timeout=10
23:11:02  > git checkout -f fbfc234895c48282e2e92b44c8c8b49745e81745 # timeout=30
23:11:02 Commit message: "Improve CSIT helm charts"
23:11:02  > git rev-list --no-walk fbfc234895c48282e2e92b44c8c8b49745e81745 # timeout=10
23:11:02 provisioning config files...
23:11:02 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:11:02 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:11:02 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins757468791448122509.sh
23:11:02 ---> python-tools-install.sh
23:11:02 Setup pyenv:
23:11:02 * system (set by /opt/pyenv/version)
23:11:03 * 3.8.13 (set by /opt/pyenv/version)
23:11:03 * 3.9.13 (set by /opt/pyenv/version)
23:11:03 * 3.10.6 (set by /opt/pyenv/version)
23:11:07 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-NYvV
23:11:07 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:10 lf-activate-venv(): INFO: Installing: lftools
23:11:42 lf-activate-venv(): INFO: Adding /tmp/venv-NYvV/bin to PATH
23:11:42 Generating Requirements File
23:12:16 Python 3.10.6
23:12:16 pip 24.0 from /tmp/venv-NYvV/lib/python3.10/site-packages/pip (python 3.10)
23:12:16 appdirs==1.4.4
23:12:16 argcomplete==3.2.2
23:12:16 aspy.yaml==1.3.0
23:12:16 attrs==23.2.0
23:12:16 autopage==0.5.2
23:12:16 beautifulsoup4==4.12.3
23:12:16 boto3==1.34.35
23:12:16 botocore==1.34.35
23:12:16 bs4==0.0.2
23:12:16 cachetools==5.3.2
23:12:16 certifi==2024.2.2
23:12:16 cffi==1.16.0
23:12:16 cfgv==3.4.0
23:12:16 chardet==5.2.0
23:12:16 charset-normalizer==3.3.2
23:12:16 click==8.1.7
23:12:16 cliff==4.5.0
23:12:16 cmd2==2.4.3
23:12:16 cryptography==3.3.2
23:12:16 debtcollector==2.5.0
23:12:16 decorator==5.1.1
23:12:16 defusedxml==0.7.1
23:12:16 Deprecated==1.2.14
23:12:16 distlib==0.3.8
23:12:16 dnspython==2.5.0
23:12:16 docker==4.2.2
23:12:16 dogpile.cache==1.3.0
23:12:16 email-validator==2.1.0.post1
23:12:16 filelock==3.13.1
23:12:16 future==0.18.3
23:12:16 gitdb==4.0.11
23:12:16 GitPython==3.1.41
23:12:16 google-auth==2.27.0
23:12:16 httplib2==0.22.0
23:12:16 identify==2.5.33
23:12:16 idna==3.6
23:12:16 importlib-resources==1.5.0
23:12:16 iso8601==2.1.0
23:12:16 Jinja2==3.1.3
23:12:16 jmespath==1.0.1
23:12:16 jsonpatch==1.33
23:12:16 jsonpointer==2.4
23:12:16 jsonschema==4.21.1
23:12:16 jsonschema-specifications==2023.12.1
23:12:16 keystoneauth1==5.5.0
23:12:16 kubernetes==29.0.0
23:12:16 lftools==0.37.8
23:12:16 lxml==5.1.0
23:12:16 MarkupSafe==2.1.5
23:12:16 msgpack==1.0.7
23:12:16 multi_key_dict==2.0.3
23:12:16 munch==4.0.0
23:12:16 netaddr==0.10.1
23:12:16 netifaces==0.11.0
23:12:16 niet==1.4.2
23:12:16 nodeenv==1.8.0
23:12:16 oauth2client==4.1.3
23:12:16 oauthlib==3.2.2
23:12:16 openstacksdk==0.62.0
23:12:16 os-client-config==2.1.0
23:12:16 os-service-types==1.7.0
23:12:16 osc-lib==3.0.0
23:12:16 oslo.config==9.3.0
23:12:16 oslo.context==5.3.0
23:12:16 oslo.i18n==6.2.0
23:12:16 oslo.log==5.4.0
23:12:16 oslo.serialization==5.3.0
23:12:16 oslo.utils==7.0.0
23:12:16 packaging==23.2
23:12:16 pbr==6.0.0
23:12:16 platformdirs==4.2.0
23:12:16 prettytable==3.9.0
23:12:16 pyasn1==0.5.1
23:12:16 pyasn1-modules==0.3.0
23:12:16 pycparser==2.21
23:12:16 pygerrit2==2.0.15
23:12:16 PyGithub==2.2.0
23:12:16 pyinotify==0.9.6
23:12:16 PyJWT==2.8.0
23:12:16 PyNaCl==1.5.0
23:12:16 pyparsing==2.4.7
23:12:16 pyperclip==1.8.2
23:12:16 pyrsistent==0.20.0
23:12:16 python-cinderclient==9.4.0
23:12:16 python-dateutil==2.8.2
23:12:16 python-heatclient==3.4.0
23:12:16 python-jenkins==1.8.2
23:12:16 python-keystoneclient==5.3.0
23:12:16 python-magnumclient==4.3.0
23:12:16 python-novaclient==18.4.0
23:12:16 python-openstackclient==6.0.0
23:12:16 python-swiftclient==4.4.0
23:12:16 pytz==2024.1
23:12:16 PyYAML==6.0.1
23:12:16 referencing==0.33.0
23:12:16 requests==2.31.0
23:12:16 requests-oauthlib==1.3.1
23:12:16 requestsexceptions==1.4.0
23:12:16 rfc3986==2.0.0
23:12:16 rpds-py==0.17.1
23:12:16 rsa==4.9
23:12:16 ruamel.yaml==0.18.5
23:12:16 ruamel.yaml.clib==0.2.8
23:12:16 s3transfer==0.10.0
23:12:16 simplejson==3.19.2
23:12:16 six==1.16.0
23:12:16 smmap==5.0.1
23:12:16 soupsieve==2.5
23:12:16 stevedore==5.1.0
23:12:16 tabulate==0.9.0
23:12:16 toml==0.10.2
23:12:16 tomlkit==0.12.3
23:12:16 tqdm==4.66.1
23:12:16 typing_extensions==4.9.0
23:12:16 tzdata==2023.4
23:12:16 urllib3==1.26.18
23:12:16 virtualenv==20.25.0
23:12:16 wcwidth==0.2.13
23:12:16 websocket-client==1.7.0
23:12:16 wrapt==1.16.0
23:12:16 xdg==6.0.0
23:12:16 xmltodict==0.13.0
23:12:16 yq==3.2.3
23:12:16 [EnvInject] - Injecting environment variables from a build step.
23:12:16 [EnvInject] - Injecting as environment variables the properties content
23:12:16 SET_JDK_VERSION=openjdk17
23:12:16 GIT_URL="git://cloud.onap.org/mirror"
23:12:16 
23:12:16 [EnvInject] - Variables injected successfully.
23:12:16 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins10399999204259747661.sh
23:12:16 ---> update-java-alternatives.sh
23:12:16 ---> Updating Java version
23:12:17 ---> Ubuntu/Debian system detected
23:12:17 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:12:17 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:12:17 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:12:17 openjdk version "17.0.4" 2022-07-19
23:12:17 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:12:17 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:12:17 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:12:17 [EnvInject] - Injecting environment variables from a build step.
23:12:17 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:12:17 [EnvInject] - Variables injected successfully.
23:12:17 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins12334347158713918464.sh
23:12:17 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
23:12:17 + set +u
23:12:17 + save_set
23:12:17 + RUN_CSIT_SAVE_SET=ehxB
23:12:17 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
23:12:17 + '[' 1 -eq 0 ']'
23:12:17 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:17 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:17 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:17 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:17 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:17 + export ROBOT_VARIABLES=
23:12:17 + ROBOT_VARIABLES=
23:12:17 + export PROJECT=pap
23:12:17 + PROJECT=pap
23:12:17 + cd /w/workspace/policy-pap-master-project-csit-pap
23:12:17 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:17 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:17 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:17 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
23:12:17 + relax_set
23:12:17 + set +e
23:12:17 + set +o pipefail
23:12:17 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:17 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:17 +++ mktemp -d
23:12:17 ++ ROBOT_VENV=/tmp/tmp.Vx3ZNVXuq2
23:12:17 ++ echo ROBOT_VENV=/tmp/tmp.Vx3ZNVXuq2
23:12:17 +++ python3 --version
23:12:17 ++ echo 'Python version is: Python 3.6.9'
23:12:17 Python version is: Python 3.6.9
23:12:17 ++ python3 -m venv --clear /tmp/tmp.Vx3ZNVXuq2
23:12:18 ++ source /tmp/tmp.Vx3ZNVXuq2/bin/activate
23:12:18 +++ deactivate nondestructive
23:12:18 +++ '[' -n '' ']'
23:12:18 +++ '[' -n '' ']'
23:12:18 +++ '[' -n /bin/bash -o -n '' ']'
23:12:18 +++ hash -r
23:12:18 +++ '[' -n '' ']'
23:12:18 +++ unset VIRTUAL_ENV
23:12:18 +++ '[' '!' nondestructive = nondestructive ']'
23:12:18 +++ VIRTUAL_ENV=/tmp/tmp.Vx3ZNVXuq2
23:12:18 +++ export VIRTUAL_ENV
23:12:18 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:18 +++ PATH=/tmp/tmp.Vx3ZNVXuq2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:18 +++ export PATH
23:12:18 +++ '[' -n '' ']'
23:12:18 +++ '[' -z '' ']'
23:12:18 +++ _OLD_VIRTUAL_PS1=
23:12:18 +++ '[' 'x(tmp.Vx3ZNVXuq2) ' '!=' x ']'
23:12:18 +++ PS1='(tmp.Vx3ZNVXuq2) '
23:12:18 +++ export PS1
23:12:18 +++ '[' -n /bin/bash -o -n '' ']'
23:12:18 +++ hash -r
23:12:18 ++ set -exu
23:12:18 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
23:12:22 ++ echo 'Installing Python Requirements'
23:12:22 Installing Python Requirements
23:12:22 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
23:12:39 ++ python3 -m pip -qq freeze
23:12:39 bcrypt==4.0.1
23:12:39 beautifulsoup4==4.12.3
23:12:39 bitarray==2.9.2
23:12:39 certifi==2024.2.2
23:12:39 cffi==1.15.1
23:12:39 charset-normalizer==2.0.12
23:12:39 cryptography==40.0.2
23:12:39 decorator==5.1.1
23:12:39 elasticsearch==7.17.9
23:12:39 elasticsearch-dsl==7.4.1
23:12:39 enum34==1.1.10
23:12:39 idna==3.6
23:12:39 importlib-resources==5.4.0
23:12:39 ipaddr==2.2.0
23:12:39 isodate==0.6.1
23:12:39 jmespath==0.10.0
23:12:39 jsonpatch==1.32
23:12:39 jsonpath-rw==1.4.0
23:12:39 jsonpointer==2.3
23:12:39 lxml==5.1.0
23:12:39 netaddr==0.8.0
23:12:39 netifaces==0.11.0
23:12:39 odltools==0.1.28
23:12:39 paramiko==3.4.0
23:12:39 pkg_resources==0.0.0
23:12:39 ply==3.11
23:12:39 pyang==2.6.0
23:12:39 pyangbind==0.8.1
23:12:39 pycparser==2.21
23:12:39 pyhocon==0.3.60
23:12:39 PyNaCl==1.5.0
23:12:39 pyparsing==3.1.1
23:12:39 python-dateutil==2.8.2
23:12:39 regex==2023.8.8
23:12:39 requests==2.27.1
23:12:39 robotframework==6.1.1
23:12:39 robotframework-httplibrary==0.4.2
23:12:39 robotframework-pythonlibcore==3.0.0
23:12:39 robotframework-requests==0.9.4
23:12:39 robotframework-selenium2library==3.0.0
23:12:39 robotframework-seleniumlibrary==5.1.3
23:12:39 robotframework-sshlibrary==3.8.0
23:12:39 scapy==2.5.0
23:12:39 scp==0.14.5
23:12:39 selenium==3.141.0
23:12:39 six==1.16.0
23:12:39 soupsieve==2.3.2.post1
23:12:39 urllib3==1.26.18
23:12:39 waitress==2.0.0
23:12:39 WebOb==1.8.7
23:12:39 WebTest==3.0.0
23:12:39 zipp==3.6.0
23:12:39 ++ mkdir -p /tmp/tmp.Vx3ZNVXuq2/src/onap
23:12:39 ++ rm -rf /tmp/tmp.Vx3ZNVXuq2/src/onap/testsuite
23:12:39 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
23:12:45 ++ echo 'Installing python confluent-kafka library'
23:12:45 Installing python confluent-kafka library
23:12:45 ++ python3 -m pip install -qq confluent-kafka
23:12:47 ++ echo 'Uninstall docker-py and reinstall docker.'
23:12:47 Uninstall docker-py and reinstall docker.
23:12:47 ++ python3 -m pip uninstall -y -qq docker
23:12:47 ++ python3 -m pip install -U -qq docker
23:12:48 ++ python3 -m pip -qq freeze
23:12:49 bcrypt==4.0.1
23:12:49 beautifulsoup4==4.12.3
23:12:49 bitarray==2.9.2
23:12:49 certifi==2024.2.2
23:12:49 cffi==1.15.1
23:12:49 charset-normalizer==2.0.12
23:12:49 confluent-kafka==2.3.0
23:12:49 cryptography==40.0.2
23:12:49 decorator==5.1.1
23:12:49 deepdiff==5.7.0
23:12:49 dnspython==2.2.1
23:12:49 docker==5.0.3
23:12:49 elasticsearch==7.17.9
23:12:49 elasticsearch-dsl==7.4.1
23:12:49 enum34==1.1.10
23:12:49 future==0.18.3
23:12:49 idna==3.6
23:12:49 importlib-resources==5.4.0
23:12:49 ipaddr==2.2.0
23:12:49 isodate==0.6.1
23:12:49 Jinja2==3.0.3
23:12:49 jmespath==0.10.0
23:12:49 jsonpatch==1.32
23:12:49 jsonpath-rw==1.4.0
23:12:49 jsonpointer==2.3
23:12:49 kafka-python==2.0.2
23:12:49 lxml==5.1.0
23:12:49 MarkupSafe==2.0.1
23:12:49 more-itertools==5.0.0
23:12:49 netaddr==0.8.0
23:12:49 netifaces==0.11.0
23:12:49 odltools==0.1.28
23:12:49 ordered-set==4.0.2
23:12:49 paramiko==3.4.0
23:12:49 pbr==6.0.0
23:12:49 pkg_resources==0.0.0
23:12:49 ply==3.11
23:12:49 protobuf==3.19.6
23:12:49 pyang==2.6.0
23:12:49 pyangbind==0.8.1
23:12:49 pycparser==2.21
23:12:49 pyhocon==0.3.60
23:12:49 PyNaCl==1.5.0
23:12:49 pyparsing==3.1.1
23:12:49 python-dateutil==2.8.2
23:12:49 PyYAML==6.0.1
23:12:49 regex==2023.8.8
23:12:49 requests==2.27.1
23:12:49 robotframework==6.1.1
23:12:49 robotframework-httplibrary==0.4.2
23:12:49 robotframework-onap==0.6.0.dev105
23:12:49 robotframework-pythonlibcore==3.0.0
23:12:49 robotframework-requests==0.9.4
23:12:49 robotframework-selenium2library==3.0.0
23:12:49 robotframework-seleniumlibrary==5.1.3
23:12:49 robotframework-sshlibrary==3.8.0
23:12:49 robotlibcore-temp==1.0.2
23:12:49 scapy==2.5.0
23:12:49 scp==0.14.5
23:12:49 selenium==3.141.0
23:12:49 six==1.16.0
23:12:49 soupsieve==2.3.2.post1
23:12:49 urllib3==1.26.18
23:12:49 waitress==2.0.0
23:12:49 WebOb==1.8.7
23:12:49 websocket-client==1.3.1
23:12:49 WebTest==3.0.0
23:12:49 zipp==3.6.0
23:12:49 ++ uname
23:12:49 ++ grep -q Linux
23:12:49 ++ sudo apt-get -y -qq install libxml2-utils
23:12:49 + load_set
23:12:49 + _setopts=ehuxB
23:12:49 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
23:12:49 ++ tr : ' '
23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:49 + set +o braceexpand
23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:49 + set +o hashall
23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:49 + set +o interactive-comments
23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:49 + set +o nounset
23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:49 + set +o xtrace
23:12:49 ++ echo ehuxB
23:12:49 ++ sed 's/./& /g'
23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:49 + set +e
23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:49 + set +h
23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:49 + set +u
23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:49 + set +x
23:12:49 + source_safely /tmp/tmp.Vx3ZNVXuq2/bin/activate
23:12:49 + '[' -z /tmp/tmp.Vx3ZNVXuq2/bin/activate ']'
23:12:49 + relax_set
23:12:49 + set +e
23:12:49 + set +o pipefail
23:12:49 + . /tmp/tmp.Vx3ZNVXuq2/bin/activate
23:12:49 ++ deactivate nondestructive
23:12:49 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
23:12:49 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:49 ++ export PATH
23:12:49 ++ unset _OLD_VIRTUAL_PATH
23:12:49 ++ '[' -n '' ']'
23:12:49 ++ '[' -n /bin/bash -o -n '' ']'
23:12:49 ++ hash -r
23:12:49 ++ '[' -n '' ']'
23:12:49 ++ unset VIRTUAL_ENV
23:12:49 ++ '[' '!' nondestructive = nondestructive ']'
23:12:49 ++ VIRTUAL_ENV=/tmp/tmp.Vx3ZNVXuq2
23:12:49 ++ export VIRTUAL_ENV
23:12:49 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:49 ++ PATH=/tmp/tmp.Vx3ZNVXuq2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:49 ++ export PATH
23:12:49 ++ '[' -n '' ']'
23:12:49 ++ '[' -z '' ']'
23:12:49 ++ _OLD_VIRTUAL_PS1='(tmp.Vx3ZNVXuq2) '
23:12:49 ++ '[' 'x(tmp.Vx3ZNVXuq2) ' '!=' x ']'
23:12:49 ++ PS1='(tmp.Vx3ZNVXuq2) (tmp.Vx3ZNVXuq2) '
23:12:49 ++ export PS1
23:12:49 ++ '[' -n /bin/bash -o -n '' ']'
23:12:49 ++ hash -r
23:12:49 + load_set
23:12:49 + _setopts=hxB
23:12:49 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:12:49 ++ tr : ' '
23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:49 + set +o braceexpand
23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:49 + set +o hashall
23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:49 + set +o interactive-comments
23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:49 + set +o xtrace
23:12:49 ++ echo hxB
23:12:49 ++ sed 's/./& /g'
23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:49 + set +h
23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:49 + set +x
23:12:49 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:49 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:49 + export TEST_OPTIONS=
23:12:49 + TEST_OPTIONS=
23:12:49 ++ mktemp -d
23:12:49 + WORKDIR=/tmp/tmp.Hjz3EwQKXg
23:12:49 + cd /tmp/tmp.Hjz3EwQKXg
23:12:49 + docker login -u docker -p docker nexus3.onap.org:10001
23:12:49 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
23:12:49 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
23:12:49 Configure a credential helper to remove this warning. See
23:12:49 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
23:12:49 
23:12:49 Login Succeeded
23:12:49 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:49 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:49 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
23:12:49 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:49 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:49 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:49 + relax_set
23:12:49 + set +e
23:12:49 + set +o pipefail
23:12:49 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:49 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
23:12:49 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:49 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
23:12:49 +++ GERRIT_BRANCH=master
23:12:49 +++ echo GERRIT_BRANCH=master
23:12:49 GERRIT_BRANCH=master
23:12:49 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:12:49 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
23:12:49 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
23:12:49 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
23:12:50 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:50 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:50 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:50 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:50 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:50 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:50 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
23:12:50 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:50 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:12:50 +++ grafana=false
23:12:50 +++ gui=false
23:12:50 +++ [[ 2 -gt 0 ]]
23:12:50 +++ key=apex-pdp
23:12:50 +++ case $key in
23:12:50 +++ echo apex-pdp
23:12:50 apex-pdp
23:12:50 +++ component=apex-pdp
23:12:50 +++ shift
23:12:50 +++ [[ 1 -gt 0 ]]
23:12:50 +++ key=--grafana
23:12:50 +++ case $key in
23:12:50 +++ grafana=true
23:12:50 +++ shift
23:12:50 +++ [[ 0 -gt 0 ]]
23:12:50 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:12:50 +++ echo 'Configuring docker compose...'
23:12:50 Configuring docker compose...
23:12:50 +++ source export-ports.sh
23:12:50 +++ source get-versions.sh
23:12:52 +++ '[' -z pap ']'
23:12:52 +++ '[' -n apex-pdp ']'
23:12:52 +++ '[' apex-pdp == logs ']'
23:12:52 +++ '[' true = true ']'
23:12:52 +++ echo 'Starting apex-pdp application with Grafana'
23:12:52 Starting apex-pdp application with Grafana
23:12:52 +++ docker-compose up -d apex-pdp grafana
23:12:53 Creating network "compose_default" with the default driver
23:12:53 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
23:12:53 latest: Pulling from prom/prometheus
23:12:56 Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
23:12:56 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
23:12:56 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
23:12:56 latest: Pulling from grafana/grafana
23:13:01 Digest: sha256:7567a7c70a3c1d75aeeedc968d1304174a16651e55a60d1fb132a05e1e63a054
23:13:01 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
23:13:01 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
23:13:01 10.10.2: Pulling from mariadb
23:13:07 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
23:13:07 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
23:13:07 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
23:13:07 3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
23:13:11 Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05
23:13:11 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
23:13:11 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
23:13:11 latest: Pulling from confluentinc/cp-zookeeper
23:13:22 Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593
23:13:22 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
23:13:22 Pulling kafka (confluentinc/cp-kafka:latest)...
23:13:23 latest: Pulling from confluentinc/cp-kafka
23:13:26 Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9
23:13:26 Status: Downloaded newer image for confluentinc/cp-kafka:latest
23:13:26 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
23:13:26 3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
23:13:33 Digest: sha256:bedafcd670058dc2d485934eb404bb04ce1a30b23cf7a567427a60ae561f25c7
23:13:33 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
23:13:33 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)...
23:13:33 3.1.1-SNAPSHOT: Pulling from onap/policy-api 23:13:36 Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e 23:13:36 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT 23:13:36 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)... 23:13:36 3.1.1-SNAPSHOT: Pulling from onap/policy-pap 23:13:46 Digest: sha256:8a0432281bb5edb6d25e3d0e62d78b6aebc2875f52ecd11259251b497208c04e 23:13:46 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT 23:13:46 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)... 23:13:46 3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:14:02 Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b 23:14:02 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT 23:14:02 Creating prometheus ... 23:14:02 Creating mariadb ... 23:14:02 Creating compose_zookeeper_1 ... 23:14:02 Creating simulator ... 23:14:15 Creating compose_zookeeper_1 ... done 23:14:15 Creating kafka ... 23:14:16 Creating kafka ... done 23:14:17 Creating prometheus ... done 23:14:17 Creating grafana ... 23:14:18 Creating grafana ... done 23:14:19 Creating mariadb ... done 23:14:19 Creating policy-db-migrator ... 23:14:20 Creating policy-db-migrator ... done 23:14:20 Creating policy-api ... 23:14:21 Creating policy-api ... done 23:14:21 Creating policy-pap ... 23:14:22 Creating policy-pap ... done 23:14:23 Creating simulator ... done 23:14:23 Creating policy-apex-pdp ... 23:14:24 Creating policy-apex-pdp ... 
done 23:14:24 +++ echo 'Prometheus server: http://localhost:30259' 23:14:24 Prometheus server: http://localhost:30259 23:14:24 +++ echo 'Grafana server: http://localhost:30269' 23:14:24 Grafana server: http://localhost:30269 23:14:24 +++ cd /w/workspace/policy-pap-master-project-csit-pap 23:14:24 ++ sleep 10 23:14:34 ++ unset http_proxy https_proxy 23:14:34 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 23:14:34 Waiting for REST to come up on localhost port 30003... 23:14:34 NAMES STATUS 23:14:34 policy-apex-pdp Up 10 seconds 23:14:34 policy-pap Up 12 seconds 23:14:34 policy-api Up 13 seconds 23:14:34 grafana Up 16 seconds 23:14:34 kafka Up 18 seconds 23:14:34 simulator Up 11 seconds 23:14:34 prometheus Up 17 seconds 23:14:34 compose_zookeeper_1 Up 19 seconds 23:14:34 mariadb Up 15 seconds 23:14:39 NAMES STATUS 23:14:39 policy-apex-pdp Up 15 seconds 23:14:39 policy-pap Up 17 seconds 23:14:39 policy-api Up 18 seconds 23:14:39 grafana Up 21 seconds 23:14:39 kafka Up 23 seconds 23:14:39 simulator Up 16 seconds 23:14:39 prometheus Up 22 seconds 23:14:39 compose_zookeeper_1 Up 24 seconds 23:14:39 mariadb Up 20 seconds 23:14:44 NAMES STATUS 23:14:44 policy-apex-pdp Up 20 seconds 23:14:44 policy-pap Up 22 seconds 23:14:44 policy-api Up 23 seconds 23:14:44 grafana Up 26 seconds 23:14:44 kafka Up 28 seconds 23:14:44 simulator Up 21 seconds 23:14:44 prometheus Up 27 seconds 23:14:44 compose_zookeeper_1 Up 29 seconds 23:14:44 mariadb Up 25 seconds 23:14:49 NAMES STATUS 23:14:49 policy-apex-pdp Up 25 seconds 23:14:49 policy-pap Up 27 seconds 23:14:49 policy-api Up 28 seconds 23:14:49 grafana Up 31 seconds 23:14:49 kafka Up 33 seconds 23:14:49 simulator Up 26 seconds 23:14:49 prometheus Up 32 seconds 23:14:49 compose_zookeeper_1 Up 34 seconds 23:14:49 mariadb Up 30 seconds 23:14:54 NAMES STATUS 23:14:54 policy-apex-pdp Up 30 seconds 23:14:54 policy-pap Up 32 seconds 23:14:54 policy-api Up 33 seconds 23:14:54 
grafana Up 36 seconds 23:14:54 kafka Up 38 seconds 23:14:54 simulator Up 31 seconds 23:14:54 prometheus Up 37 seconds 23:14:54 compose_zookeeper_1 Up 39 seconds 23:14:54 mariadb Up 35 seconds 23:14:59 NAMES STATUS 23:14:59 policy-apex-pdp Up 35 seconds 23:14:59 policy-pap Up 37 seconds 23:14:59 policy-api Up 38 seconds 23:14:59 grafana Up 41 seconds 23:14:59 kafka Up 43 seconds 23:14:59 simulator Up 36 seconds 23:14:59 prometheus Up 42 seconds 23:14:59 compose_zookeeper_1 Up 44 seconds 23:14:59 mariadb Up 40 seconds 23:15:00 ++ export 'SUITES=pap-test.robot 23:15:00 pap-slas.robot' 23:15:00 ++ SUITES='pap-test.robot 23:15:00 pap-slas.robot' 23:15:00 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:00 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:15:00 + load_set 23:15:00 + _setopts=hxB 23:15:00 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:15:00 ++ tr : ' ' 23:15:00 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:00 + set +o braceexpand 23:15:00 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:00 + set +o hashall 23:15:00 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:00 + set +o interactive-comments 23:15:00 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:00 + set +o xtrace 23:15:00 ++ echo hxB 23:15:00 ++ sed 's/./& /g' 23:15:00 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:15:00 + set +h 23:15:00 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:15:00 + set +x 23:15:00 + docker_stats 23:15:00 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 23:15:00 ++ uname -s 23:15:00 + '[' Linux == Darwin ']' 23:15:00 + sh -c 'top -bn1 | head -3' 23:15:00 top - 23:15:00 up 4 min, 0 users, load average: 2.91, 1.35, 0.54 23:15:00 
Tasks: 207 total, 1 running, 131 sleeping, 0 stopped, 0 zombie 23:15:00 %Cpu(s): 13.3 us, 2.8 sy, 0.0 ni, 79.6 id, 4.2 wa, 0.0 hi, 0.1 si, 0.1 st 23:15:00 + echo 23:15:00 + sh -c 'free -h' 23:15:00 23:15:00 total used free shared buff/cache available 23:15:00 Mem: 31G 2.7G 22G 1.3M 6.7G 28G 23:15:00 Swap: 1.0G 0B 1.0G 23:15:00 + echo 23:15:00 23:15:00 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:15:00 NAMES STATUS 23:15:00 policy-apex-pdp Up 35 seconds 23:15:00 policy-pap Up 37 seconds 23:15:00 policy-api Up 39 seconds 23:15:00 grafana Up 42 seconds 23:15:00 kafka Up 44 seconds 23:15:00 simulator Up 36 seconds 23:15:00 prometheus Up 42 seconds 23:15:00 compose_zookeeper_1 Up 44 seconds 23:15:00 mariadb Up 40 seconds 23:15:00 + echo 23:15:00 23:15:00 + docker stats --no-stream 23:15:02 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:15:02 ee1f1738261a policy-apex-pdp 8.36% 182.6MiB / 31.41GiB 0.57% 9.03kB / 8.49kB 0B / 0B 48 23:15:02 b1b7d7896699 policy-pap 29.15% 504.8MiB / 31.41GiB 1.57% 28.8kB / 30.6kB 0B / 181MB 63 23:15:02 5fd5934147ab policy-api 0.12% 504.9MiB / 31.41GiB 1.57% 1e+03kB / 711kB 0B / 0B 55 23:15:02 bd4f8fba0e1a grafana 0.04% 57.91MiB / 31.41GiB 0.18% 19.5kB / 3.4kB 0B / 24MB 17 23:15:02 9f872bbe5af4 kafka 1.41% 364.9MiB / 31.41GiB 1.13% 64.5kB / 67.3kB 0B / 475kB 81 23:15:02 879620c6b816 simulator 0.08% 123.8MiB / 31.41GiB 0.38% 1.15kB / 0B 0B / 0B 76 23:15:02 a487037c08b5 prometheus 0.00% 18.45MiB / 31.41GiB 0.06% 1.64kB / 474B 98.3kB / 0B 12 23:15:02 1d8b3d85a938 compose_zookeeper_1 0.10% 99.92MiB / 31.41GiB 0.31% 53.1kB / 46.7kB 131kB / 385kB 60 23:15:02 43559b1a61a2 mariadb 0.02% 101.8MiB / 31.41GiB 0.32% 997kB / 1.18MB 11MB / 46.3MB 37 23:15:02 + echo 23:15:02 23:15:02 + cd /tmp/tmp.Hjz3EwQKXg 23:15:02 + echo 'Reading the testplan:' 23:15:02 Reading the testplan: 23:15:02 + echo 'pap-test.robot 23:15:02 pap-slas.robot' 23:15:02 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 23:15:02 + sed 
's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 23:15:02 + cat testplan.txt 23:15:02 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 23:15:02 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:15:02 ++ xargs 23:15:02 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' 23:15:02 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:02 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:15:02 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:02 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:15:02 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' 23:15:02 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... 
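The testplan expansion traced above can be sketched as a small pipeline: drop comment and blank lines from the plan, prefix each suite name with the absolute tests directory, then flatten the list into the single space-separated `SUITES` string that is handed to `robot.run`. A hedged reconstruction (the prefix path is the one printed in the log; the input file name `testplan.txt.in` is illustrative):

```shell
# Sketch of the testplan -> SUITES expansion seen in the trace.
PREFIX=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/

# Illustrative testplan with a comment and a blank line to show the filtering.
printf '# suites to run\n\npap-test.robot\npap-slas.robot\n' > testplan.txt.in

# Drop comments/blank lines, prepend the absolute tests directory.
grep -Ev '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt.in \
  | sed "s|^|${PREFIX}|" > testplan.txt
cat testplan.txt

# xargs with no command joins the lines into one space-separated string.
SUITES=$(xargs < testplan.txt)
echo "$SUITES"
```

`sed` uses `|` as the substitution delimiter so the slashes in the workspace path need no escaping.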
23:15:02 + relax_set 23:15:02 + set +e 23:15:02 + set +o pipefail 23:15:02 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:15:03 ============================================================================== 23:15:03 pap 23:15:03 ============================================================================== 23:15:03 pap.Pap-Test 23:15:03 ============================================================================== 23:15:04 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 23:15:04 ------------------------------------------------------------------------------ 23:15:04 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 23:15:04 ------------------------------------------------------------------------------ 23:15:05 LoadNodeTemplates :: Create node templates in database using speci... 
| PASS | 23:15:05 ------------------------------------------------------------------------------ 23:15:05 Healthcheck :: Verify policy pap health check | PASS | 23:15:05 ------------------------------------------------------------------------------ 23:15:25 Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 23:15:25 ------------------------------------------------------------------------------ 23:15:25 Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 23:15:25 ------------------------------------------------------------------------------ 23:15:26 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | 23:15:26 ------------------------------------------------------------------------------ 23:15:26 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 23:15:26 ------------------------------------------------------------------------------ 23:15:26 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | 23:15:26 ------------------------------------------------------------------------------ 23:15:27 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 23:15:27 ------------------------------------------------------------------------------ 23:15:27 DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 23:15:27 ------------------------------------------------------------------------------ 23:15:27 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 23:15:27 ------------------------------------------------------------------------------ 23:15:27 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | 23:15:27 ------------------------------------------------------------------------------ 23:15:27 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... 
| PASS | 23:15:27 ------------------------------------------------------------------------------ 23:15:28 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | 23:15:28 ------------------------------------------------------------------------------ 23:15:28 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | 23:15:28 ------------------------------------------------------------------------------ 23:15:28 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 23:15:28 ------------------------------------------------------------------------------ 23:15:48 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | 23:15:48 ------------------------------------------------------------------------------ 23:15:48 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 23:15:48 ------------------------------------------------------------------------------ 23:15:48 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 23:15:48 ------------------------------------------------------------------------------ 23:15:49 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | 23:15:49 ------------------------------------------------------------------------------ 23:15:49 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 23:15:49 ------------------------------------------------------------------------------ 23:15:49 pap.Pap-Test | PASS | 23:15:49 22 tests, 22 passed, 0 failed 23:15:49 ============================================================================== 23:15:49 pap.Pap-Slas 23:15:49 ============================================================================== 23:16:49 WaitForPrometheusServer :: Wait for Prometheus server to gather al... 
| PASS | 23:16:49 ------------------------------------------------------------------------------ 23:16:49 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | 23:16:49 ------------------------------------------------------------------------------ 23:16:49 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | 23:16:49 ------------------------------------------------------------------------------ 23:16:49 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 23:16:49 ------------------------------------------------------------------------------ 23:16:49 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 23:16:49 ------------------------------------------------------------------------------ 23:16:49 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 23:16:49 ------------------------------------------------------------------------------ 23:16:49 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | 23:16:49 ------------------------------------------------------------------------------ 23:16:49 ValidateResponseTimeDeleteGroup :: Validate delete group response ... 
| PASS | 23:16:49 ------------------------------------------------------------------------------ 23:16:49 pap.Pap-Slas | PASS | 23:16:49 8 tests, 8 passed, 0 failed 23:16:49 ============================================================================== 23:16:49 pap | PASS | 23:16:49 30 tests, 30 passed, 0 failed 23:16:49 ============================================================================== 23:16:49 Output: /tmp/tmp.Hjz3EwQKXg/output.xml 23:16:49 Log: /tmp/tmp.Hjz3EwQKXg/log.html 23:16:49 Report: /tmp/tmp.Hjz3EwQKXg/report.html 23:16:49 + RESULT=0 23:16:49 + load_set 23:16:49 + _setopts=hxB 23:16:49 ++ tr : ' ' 23:16:49 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:16:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:49 + set +o braceexpand 23:16:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:49 + set +o hashall 23:16:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:49 + set +o interactive-comments 23:16:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:49 + set +o xtrace 23:16:49 ++ echo hxB 23:16:49 ++ sed 's/./& /g' 23:16:49 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:16:49 + set +h 23:16:49 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:16:49 + set +x 23:16:49 + echo 'RESULT: 0' 23:16:49 RESULT: 0 23:16:49 + exit 0 23:16:49 + on_exit 23:16:49 + rc=0 23:16:49 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]] 23:16:49 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:16:49 NAMES STATUS 23:16:49 policy-apex-pdp Up 2 minutes 23:16:49 policy-pap Up 2 minutes 23:16:49 policy-api Up 2 minutes 23:16:49 grafana Up 2 minutes 23:16:49 kafka Up 2 minutes 23:16:49 simulator Up 2 minutes 23:16:49 prometheus Up 2 minutes 23:16:49 compose_zookeeper_1 Up 2 minutes 23:16:49 mariadb Up 2 minutes 23:16:49 + docker_stats 23:16:49 ++ uname -s 23:16:49 + '[' Linux == Darwin ']' 23:16:49 + sh -c 'top -bn1 | head -3' 23:16:49 top - 23:16:49 up 6 min, 0 users, load average: 0.60, 1.02, 0.51 23:16:49 Tasks: 
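The `relax_set` / `RESULT=0` bracketing around the Robot invocation above follows a common CI pattern: loosen error handling so a failing test run does not abort the wrapper script, capture the exit code, and report it explicitly. A minimal sketch reconstructed from the trace (assumption: `false` stands in for the real `python3 -m robot.run ...` call):

```shell
# Sketch of the relax_set / RESULT pattern seen in the trace.
relax_set() {
  set +e          # a failing command must not abort the wrapper script
  set +o pipefail # a failing producer in a pipeline must not fail the pipe
}

relax_set
false             # stand-in for: python3 -m robot.run -N pap ... (may fail)
RESULT=$?         # capture the test run's exit code before anything else runs
echo "RESULT: ${RESULT}"
```

In the log the suites all pass, so `RESULT` is 0 and the script reaches `exit 0` followed by the `on_exit` teardown.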
195 total, 1 running, 129 sleeping, 0 stopped, 0 zombie 23:16:49 %Cpu(s): 10.7 us, 2.1 sy, 0.0 ni, 83.8 id, 3.3 wa, 0.0 hi, 0.0 si, 0.1 st 23:16:49 + echo 23:16:49 23:16:49 + sh -c 'free -h' 23:16:49 total used free shared buff/cache available 23:16:49 Mem: 31G 2.7G 21G 1.3M 6.7G 28G 23:16:49 Swap: 1.0G 0B 1.0G 23:16:49 + echo 23:16:49 23:16:49 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:16:49 NAMES STATUS 23:16:49 policy-apex-pdp Up 2 minutes 23:16:49 policy-pap Up 2 minutes 23:16:49 policy-api Up 2 minutes 23:16:49 grafana Up 2 minutes 23:16:49 kafka Up 2 minutes 23:16:49 simulator Up 2 minutes 23:16:49 prometheus Up 2 minutes 23:16:49 compose_zookeeper_1 Up 2 minutes 23:16:49 mariadb Up 2 minutes 23:16:49 + echo 23:16:49 23:16:49 + docker stats --no-stream 23:16:52 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:16:52 ee1f1738261a policy-apex-pdp 0.40% 181.6MiB / 31.41GiB 0.56% 56.3kB / 80.3kB 0B / 0B 50 23:16:52 b1b7d7896699 policy-pap 1.09% 498.6MiB / 31.41GiB 1.55% 2.33MB / 815kB 0B / 181MB 65 23:16:52 5fd5934147ab policy-api 0.10% 543.3MiB / 31.41GiB 1.69% 2.49MB / 1.27MB 0B / 0B 56 23:16:52 bd4f8fba0e1a grafana 0.02% 65.58MiB / 31.41GiB 0.20% 20.5kB / 4.49kB 0B / 24MB 17 23:16:52 9f872bbe5af4 kafka 11.57% 388.6MiB / 31.41GiB 1.21% 236kB / 212kB 0B / 582kB 83 23:16:52 879620c6b816 simulator 0.07% 123.8MiB / 31.41GiB 0.38% 1.37kB / 0B 0B / 0B 76 23:16:52 a487037c08b5 prometheus 0.00% 24.73MiB / 31.41GiB 0.08% 180kB / 10.2kB 98.3kB / 0B 12 23:16:52 1d8b3d85a938 compose_zookeeper_1 0.09% 100MiB / 31.41GiB 0.31% 56kB / 48.3kB 131kB / 385kB 60 23:16:52 43559b1a61a2 mariadb 0.01% 103.1MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 46.7MB 28 23:16:52 + echo 23:16:52 23:16:52 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:52 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' 23:16:52 + relax_set 23:16:52 + set +e 23:16:52 + set +o pipefail 23:16:52 
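The repeated `top` / `free -h` / `docker ps` / `docker stats --no-stream` sequence is the job's `docker_stats` helper, run once after setup and once after the tests so the two snapshots can be compared in the archived `_sysinfo-*.txt` files. A hedged sketch, reconstructed from the trace (the archive path is the one printed in the log; the function body may differ in the real script):

```shell
# Sketch of the docker_stats helper seen in the trace.
docker_stats() {
  # The Darwin guard exists because top's batch flags differ on macOS.
  if [ "$(uname -s)" != Darwin ]; then
    sh -c 'top -bn1 | head -3'   # uptime, load average, CPU summary
  fi
  echo
  sh -c 'free -h'                # memory and swap usage
  echo
  docker ps --format 'table {{ .Names }}\t{{ .Status }}'
  echo
  docker stats --no-stream       # one-shot per-container CPU/mem/net/IO snapshot
}

# In the job the output is duplicated into the CSIT archive, e.g.:
# docker_stats | tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
```

`--no-stream` makes `docker stats` print a single sample and exit instead of refreshing continuously, which is what a batch CI log needs.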
+ . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:52 ++ echo 'Shut down started!' 23:16:52 Shut down started! 23:16:52 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:16:52 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:16:52 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:16:52 ++ source export-ports.sh 23:16:52 ++ source get-versions.sh 23:16:54 ++ echo 'Collecting logs from docker compose containers...' 23:16:54 Collecting logs from docker compose containers... 23:16:54 ++ docker-compose logs 23:16:56 ++ cat docker_compose.log 23:16:56 Attaching to policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, grafana, kafka, simulator, prometheus, compose_zookeeper_1, mariadb 23:16:56 zookeeper_1 | ===> User 23:16:56 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:56 zookeeper_1 | ===> Configuring ... 23:16:56 zookeeper_1 | ===> Running preflight checks ... 23:16:56 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 23:16:56 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 23:16:56 zookeeper_1 | ===> Launching ... 23:16:56 zookeeper_1 | ===> Launching zookeeper ... 
23:16:56 zookeeper_1 | [2024-02-05 23:14:18,670] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,678] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,678] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,678] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,678] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,679] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,679] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,680] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,680] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,681] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,681] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,682] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,682] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,682] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,682] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,682] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,697] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@5fa07e12 (org.apache.zookeeper.server.ServerMetrics) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,701] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,701] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,703] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,712] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,712] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,712] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,712] INFO / / ___ ___ | 
| __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,712] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,712] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,712] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,712] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,712] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,712] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:host.name=1d8b3d85a938 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,713] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/..
/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.
3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-co
re-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.memory.free=491MB 
(org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,714] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,715] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,716] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,716] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,717] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,717] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,717] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,717] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,717] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,718] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,718] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,718] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,720] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,720] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,720] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,720] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,720] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,738] INFO Logging initialized @496ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:56 
zookeeper_1 | [2024-02-05 23:14:18,820] WARN o.e.j.s.ServletContextHandler@45385f75{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,820] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,838] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,870] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,870] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,872] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,876] WARN ServletContext@o.e.j.s.ServletContextHandler@45385f75{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,886] INFO Started o.e.j.s.ServletContextHandler@45385f75{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,898] INFO Started ServerConnector@304bb45b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,898] INFO Started @656ms (org.eclipse.jetty.server.Server) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,898] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,902] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,902] WARN maxCnxns is not configured, using 
default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,904] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,905] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,920] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,921] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,922] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,922] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,926] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,926] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,928] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,929] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,930] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,937] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms 
(org.apache.zookeeper.server.RequestThrottler) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,937] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,955] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:56 zookeeper_1 | [2024-02-05 23:14:18,956] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 23:16:56 zookeeper_1 | [2024-02-05 23:14:20,174] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:56 mariadb | 2024-02-05 23:14:19+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:56 mariadb | 2024-02-05 23:14:19+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:56 mariadb | 2024-02-05 23:14:19+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:56 mariadb | 2024-02-05 23:14:19+00:00 [Note] [Entrypoint]: Initializing database files 23:16:56 mariadb | 2024-02-05 23:14:19 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:56 mariadb | 2024-02-05 23:14:19 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:56 mariadb | 2024-02-05 23:14:19 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:56 mariadb | 23:16:56 mariadb | 23:16:56 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 
23:16:56 mariadb | To do so, start the server, then issue the following command: 23:16:56 mariadb | 23:16:56 mariadb | '/usr/bin/mysql_secure_installation' 23:16:56 mariadb | 23:16:56 mariadb | which will also give you the option of removing the test 23:16:56 mariadb | databases and anonymous user created by default. This is 23:16:56 mariadb | strongly recommended for production servers. 23:16:56 mariadb | 23:16:56 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:56 mariadb | 23:16:56 mariadb | Please report any problems at https://mariadb.org/jira 23:16:56 mariadb | 23:16:56 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:16:56 mariadb | 23:16:56 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:56 mariadb | https://mariadb.org/get-involved/ 23:16:56 mariadb | 23:16:56 mariadb | 2024-02-05 23:14:21+00:00 [Note] [Entrypoint]: Database files initialized 23:16:56 mariadb | 2024-02-05 23:14:21+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:56 mariadb | 2024-02-05 23:14:21+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... 
23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Number of transaction pools: 1 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: 128 rollback segments are active. 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:56 mariadb | 2024-02-05 23:14:21 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 
23:16:56 mariadb | 2024-02-05 23:14:21 0 [Note] mariadbd: ready for connections. 23:16:56 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:56 mariadb | 2024-02-05 23:14:22+00:00 [Note] [Entrypoint]: Temporary server started. 23:16:56 mariadb | 2024-02-05 23:14:24+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:56 mariadb | 2024-02-05 23:14:24+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:56 mariadb | 23:16:56 mariadb | 2024-02-05 23:14:24+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:56 mariadb | 23:16:56 mariadb | 2024-02-05 23:14:24+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:56 mariadb | #!/bin/bash -xv 23:16:56 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:56 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:56 mariadb | # 23:16:56 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:56 mariadb | # you may not use this file except in compliance with the License. 23:16:56 mariadb | # You may obtain a copy of the License at 23:16:56 mariadb | # 23:16:56 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:56 mariadb | # 23:16:56 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:56 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:56 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:56 mariadb | # See the License for the specific language governing permissions and 23:16:56 mariadb | # limitations under the License. 
23:16:56 mariadb | 23:16:56 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:56 mariadb | do 23:16:56 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:56 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:56 mariadb | done 23:16:56 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:56 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:56 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:56 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:56 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:56 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:56 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:56 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:56 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:56 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:56 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:56 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:56 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:56 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:56 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON 
`clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:56 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:56 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:16:56 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:56 mariadb | 23:16:56 kafka | ===> User 23:16:56 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:56 kafka | ===> Configuring ... 23:16:56 kafka | Running in Zookeeper mode... 23:16:56 kafka | ===> Running preflight checks ... 23:16:56 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:56 kafka | ===> Check if Zookeeper is healthy ... 23:16:56 kafka | [2024-02-05 23:14:20,110] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,111] INFO Client environment:host.name=9f872bbe5af4 (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,111] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,111] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,111] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,111] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/ja
va/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client 
environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,112] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,115] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,118] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:56 kafka | [2024-02-05 23:14:20,123] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:56 kafka | [2024-02-05 23:14:20,130] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:56 kafka | [2024-02-05 23:14:20,143] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) 23:16:56 kafka | [2024-02-05 23:14:20,147] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:16:56 kafka | [2024-02-05 23:14:20,157] INFO Socket connection established, initiating session, client: /172.17.0.6:33642, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 23:16:56 kafka | [2024-02-05 23:14:20,193] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003b5f80000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:16:56 kafka | [2024-02-05 23:14:20,325] INFO Session: 0x1000003b5f80000 closed (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:20,326] INFO EventThread shut down for session: 0x1000003b5f80000 (org.apache.zookeeper.ClientCnxn) 23:16:56 kafka | Using log4j config /etc/kafka/log4j.properties 23:16:56 kafka | ===> Launching ... 23:16:56 kafka | ===> Launching kafka ... 23:16:56 kafka | [2024-02-05 23:14:21,003] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:16:56 kafka | [2024-02-05 23:14:21,321] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:56 kafka | [2024-02-05 23:14:21,395] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:16:56 kafka | [2024-02-05 23:14:21,397] INFO starting (kafka.server.KafkaServer) 23:16:56 kafka | [2024-02-05 23:14:21,397] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:16:56 kafka | [2024-02-05 23:14:21,410] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.404997351Z level=info msg="Starting Grafana" version=10.3.1 commit=00a22ff8b28550d593ec369ba3da1b25780f0a4a branch=HEAD compiled=2024-01-22T18:40:42Z 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405266192Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405281675Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405310942Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405323755Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405327346Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405353322Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405359583Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405367235Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405373546Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405377777Z level=info msg="Config overridden from Environment 
variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405381018Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405384389Z level=info msg=Target target=[all] 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.40539143Z level=info msg="Path Home" path=/usr/share/grafana 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405394401Z level=info msg="Path Data" path=/var/lib/grafana 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405398762Z level=info msg="Path Logs" path=/var/log/grafana 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405401643Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405405433Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:16:56 grafana | logger=settings t=2024-02-05T23:14:18.405438731Z level=info msg="App mode production" 23:16:56 grafana | logger=sqlstore t=2024-02-05T23:14:18.405774937Z level=info msg="Connecting to DB" dbtype=sqlite3 23:16:56 grafana | logger=sqlstore t=2024-02-05T23:14:18.405796622Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.406499842Z level=info msg="Starting DB migrations" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.407507911Z level=info msg="Executing migration" id="create migration_log table" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.408324616Z level=info msg="Migration successfully executed" id="create migration_log table" duration=816.344µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.414372199Z level=info msg="Executing migration" id="create user table" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.415202287Z level=info msg="Migration successfully executed" id="create 
user table" duration=829.688µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.418435081Z level=info msg="Executing migration" id="add unique index user.login" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.419298856Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=862.075µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.422574299Z level=info msg="Executing migration" id="add unique index user.email" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.423377492Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=802.793µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.429060592Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.429826265Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=767.374µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.432811363Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.433539808Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=728.275µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.436299534Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.439240241Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.939967ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.444638196Z level=info msg="Executing migration" id="create user table v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.445476237Z level=info msg="Migration successfully executed" id="create user table v2" duration=834.16µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.448534551Z level=info msg="Executing migration" id="create 
index UQE_user_login - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.449704776Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.167435ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.45302839Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.454196206Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.167385ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.459438075Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.460010835Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=573.12µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.462967856Z level=info msg="Executing migration" id="Drop old table user_v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.463652481Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=686.646µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.466682379Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.467862427Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.181629ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.470979034Z level=info msg="Executing migration" id="Update user table charset" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.471024474Z level=info msg="Migration successfully executed" id="Update user table charset" duration=48.401µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.476450146Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.477732866Z level=info msg="Migration successfully 
executed" id="Add last_seen_at column to user" duration=1.282841ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.480887212Z level=info msg="Executing migration" id="Add missing user data" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.481165225Z level=info msg="Migration successfully executed" id="Add missing user data" duration=277.873µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.484074075Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:host.name=9f872bbe5af4 (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client 
environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share
/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/ka
fka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.j
ar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
23:16:56 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
23:16:56 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
23:16:56 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
23:16:56 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
23:16:56 mariadb |
23:16:56 mariadb | 2024-02-05 23:14:24 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
23:16:56 mariadb | 2024-02-05 23:14:24 0 [Note] InnoDB: FTS optimize thread exiting.
23:16:56 mariadb | 2024-02-05 23:14:24+00:00 [Note] [Entrypoint]: Stopping temporary server
23:16:56 mariadb | 2024-02-05 23:14:24 0 [Note] InnoDB: Starting shutdown...
23:16:56 mariadb | 2024-02-05 23:14:24 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
23:16:56 mariadb | 2024-02-05 23:14:24 0 [Note] InnoDB: Buffer pool(s) dump completed at 240205 23:14:24
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Shutdown completed; log sequence number 339701; transaction id 298
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] mariadbd: Shutdown complete
23:16:56 mariadb |
23:16:56 mariadb | 2024-02-05 23:14:25+00:00 [Note] [Entrypoint]: Temporary server stopped
23:16:56 mariadb |
23:16:56 mariadb | 2024-02-05 23:14:25+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
23:16:56 mariadb |
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Number of transaction pools: 1
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Completed initialization of buffer pool
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: 128 rollback segments are active.
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: log sequence number 339701; transaction id 299
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] Plugin 'FEEDBACK' is disabled.
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] Server socket created on IP: '0.0.0.0'. 23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] Server socket created on IP: '::'. 23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] mariadbd: ready for connections. 23:16:56 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:56 mariadb | 2024-02-05 23:14:25 0 [Note] InnoDB: Buffer pool(s) load completed at 240205 23:14:25 23:16:56 mariadb | 2024-02-05 23:14:25 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:56 mariadb | 2024-02-05 23:14:26 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:16:56 mariadb | 2024-02-05 23:14:26 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 23:16:56 mariadb | 2024-02-05 23:14:26 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.485212984Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.138799ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.488202752Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.488901241Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=698.088µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.49396311Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.495097847Z level=info 
msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.134407ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.498140187Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.507827236Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.685529ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.510983212Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.511699924Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=716.343µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.514896549Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.515606591Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=704.931µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.520572917Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.521276908Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=703.56µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.524435725Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.525169411Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=732.706µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.530207864Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.530955324Z level=info msg="Migration successfully 
executed" id="create index IDX_temp_user_status - v1-7" duration=747.059µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.534053567Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.534089625Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=32.047µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.538197598Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.538924362Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=732.036µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.541967483Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.542730726Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=763.403µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.548174981Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.54909204Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=920.679µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.552166207Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.55297313Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=809.004µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.558595266Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.562218058Z level=info msg="Migration successfully executed" id="Rename table temp_user to 
temp_user_tmp_qwerty - v1" duration=3.621451ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.565478317Z level=info msg="Executing migration" id="create temp_user v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.566374702Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=901.255µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.569348546Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.570298101Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=950.806µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.573323248Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.574214991Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=893.633µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.579520494Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.580445314Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=926.051µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.584124749Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.585052209Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=920.099µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.588085038Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.588542982Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=459.564µs 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:18.594819416Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.595770882Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=951.765µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.598945832Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.599577506Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=632.034µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.60263404Z level=info msg="Executing migration" id="create star table" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.603313514Z level=info msg="Migration successfully executed" id="create star table" duration=679.594µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.606929494Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.607621681Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=693.527µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.612542738Z level=info msg="Executing migration" id="create org table v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.613262402Z level=info msg="Migration successfully executed" id="create org table v1" duration=719.794µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.61753368Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:56 kafka | [2024-02-05 23:14:21,414] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:56 kafka | [2024-02-05 23:14:21,416] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 
watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68be8808 (org.apache.zookeeper.ZooKeeper)
23:16:56 kafka | [2024-02-05 23:14:21,419] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:16:56 kafka | [2024-02-05 23:14:21,424] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:16:56 kafka | [2024-02-05 23:14:21,431] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
23:16:56 kafka | [2024-02-05 23:14:21,432] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
23:16:56 kafka | [2024-02-05 23:14:21,438] INFO Socket connection established, initiating session, client: /172.17.0.6:33644, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
23:16:56 kafka | [2024-02-05 23:14:21,508] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003b5f80001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
23:16:56 kafka | [2024-02-05 23:14:21,513] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
23:16:56 kafka | [2024-02-05 23:14:21,783] INFO Cluster ID = GFmMeC8ERWyjG0XVKKQ9OQ (kafka.server.KafkaServer)
23:16:56 kafka | [2024-02-05 23:14:21,786] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
23:16:56 kafka | [2024-02-05 23:14:21,832] INFO KafkaConfig values:
23:16:56 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
23:16:56 kafka | alter.config.policy.class.name = null
23:16:56 kafka | alter.log.dirs.replication.quota.window.num = 11
23:16:56 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
23:16:56 kafka | authorizer.class.name =
23:16:56 kafka | auto.create.topics.enable = true
23:16:56 kafka | auto.include.jmx.reporter = true
23:16:56 kafka | auto.leader.rebalance.enable = true
23:16:56 kafka | background.threads = 10
23:16:56 kafka | broker.heartbeat.interval.ms = 2000
23:16:56 kafka | broker.id = 1
23:16:56 kafka | broker.id.generation.enable = true
23:16:56 kafka | broker.rack = null
23:16:56 kafka | broker.session.timeout.ms = 9000
23:16:56 kafka | client.quota.callback.class = null
23:16:56 kafka | compression.type = producer
23:16:56 kafka | connection.failed.authentication.delay.ms = 100
23:16:56 kafka | connections.max.idle.ms = 600000
23:16:56 kafka | connections.max.reauth.ms = 0
23:16:56 kafka | control.plane.listener.name = null
23:16:56 kafka | controlled.shutdown.enable = true
23:16:56 kafka | controlled.shutdown.max.retries = 3
23:16:56 kafka | controlled.shutdown.retry.backoff.ms = 5000
23:16:56 kafka | controller.listener.names = null
23:16:56 kafka | controller.quorum.append.linger.ms = 25
23:16:56 kafka | controller.quorum.election.backoff.max.ms = 1000
23:16:56 kafka | controller.quorum.election.timeout.ms = 1000
23:16:56 kafka | controller.quorum.fetch.timeout.ms = 2000
23:16:56 kafka | controller.quorum.request.timeout.ms = 2000
23:16:56 kafka |
controller.quorum.retry.backoff.ms = 20 23:16:56 kafka | controller.quorum.voters = [] 23:16:56 kafka | controller.quota.window.num = 11 23:16:56 kafka | controller.quota.window.size.seconds = 1 23:16:56 kafka | controller.socket.timeout.ms = 30000 23:16:56 kafka | create.topic.policy.class.name = null 23:16:56 kafka | default.replication.factor = 1 23:16:56 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:56 kafka | delegation.token.expiry.time.ms = 86400000 23:16:56 kafka | delegation.token.master.key = null 23:16:56 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:56 kafka | delegation.token.secret.key = null 23:16:56 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:56 kafka | delete.topic.enable = true 23:16:56 kafka | early.start.listeners = null 23:16:56 kafka | fetch.max.bytes = 57671680 23:16:56 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:56 kafka | group.consumer.assignors = [] 23:16:56 kafka | group.consumer.heartbeat.interval.ms = 5000 23:16:56 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:16:56 kafka | group.consumer.max.session.timeout.ms = 60000 23:16:56 kafka | group.consumer.max.size = 2147483647 23:16:56 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:16:56 kafka | group.consumer.min.session.timeout.ms = 45000 23:16:56 kafka | group.consumer.session.timeout.ms = 45000 23:16:56 kafka | group.coordinator.new.enable = false 23:16:56 kafka | group.coordinator.threads = 1 23:16:56 kafka | group.initial.rebalance.delay.ms = 3000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.618913624Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.379273ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.622143717Z level=info msg="Executing migration" id="create org_user table v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.623379917Z level=info msg="Migration successfully executed" id="create org_user 
table v1" duration=1.241661ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.628408698Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.62925364Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=844.791µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.632095815Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.632915091Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=819.117µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.635764788Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.636580903Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=815.285µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.640016142Z level=info msg="Executing migration" id="Update org table charset" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.640080677Z level=info msg="Migration successfully executed" id="Update org table charset" duration=65.645µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.64573578Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.645760726Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=25.995µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.648701793Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.648943898Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=240.834µs 23:16:56 grafana | 
logger=migrator t=2024-02-05T23:14:18.698442591Z level=info msg="Executing migration" id="create dashboard table" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.699825934Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.388555ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.705376003Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.706797687Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.421113ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.712038616Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.713181365Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.142118ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.717043122Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.717886003Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=845.531µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.721127188Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.721954826Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=827.327µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.725268857Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.726044634Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=775.687µs 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:18.730235195Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.736745753Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.504157ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.740049832Z level=info msg="Executing migration" id="create dashboard v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.74079068Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=740.678µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.744997515Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.745775182Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=777.296µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.749333559Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.750114456Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=780.507µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.753298658Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.753669203Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=368.244µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.758215974Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.759316795Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.101461ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.763898674Z level=info msg="Executing migration" id="alter dashboard.data to 
mediumtext v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.764018251Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=115.706µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.767488369Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.768840065Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.351606ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.771848068Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.773143322Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.294875ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.777167235Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.778431451Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.263906ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.782187084Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.783032356Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=845.601µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.786376054Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.788226815Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.85035ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.792252228Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:18.793066673Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=814.215µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.796208936Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.797008067Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=798.711µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.80098559Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.801011266Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=26.786µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.804274066Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.804297722Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=24.495µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.806925038Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.808836051Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.910412ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.813764239Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.815672063Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.912984ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.819730313Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.821843073Z level=info msg="Migration successfully 
executed" id="Add column has_acl in dashboard" duration=2.112529ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.824882683Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.826846218Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.963126ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.830273556Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.830547798Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=273.242µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.834802614Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.836247601Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.433685ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.839934098Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.841153985Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.219487ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.844691858Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.844731047Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=46.05µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.847996767Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.848877918Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" 
duration=880.541µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.852784384Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.853457427Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=672.492µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.856638138Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.863619443Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.976513ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.866932004Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.867614479Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=678.944µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.871515655Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.872847337Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.331101ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.882357405Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.883682326Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.32454ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.887026444Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:18.887432406Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=406.352µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.890797421Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.891291753Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=493.902µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.893800342Z level=info msg="Executing migration" id="Add check_sum column" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.895316485Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.513873ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.897823184Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.898592159Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=769.774µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.902111318Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.902335249Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=223.66µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.905991598Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.906157766Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=163.507µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.908916952Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.909503345Z level=info msg="Migration successfully executed" id="Add index for 
dashboard_is_folder" duration=586.343µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.912938194Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.914471232Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.532868ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.917277629Z level=info msg="Executing migration" id="create data_source table" 23:16:56 policy-api | Waiting for mariadb port 3306... 23:16:56 policy-api | mariadb (172.17.0.4:3306) open 23:16:56 policy-api | Waiting for policy-db-migrator port 6824... 23:16:56 policy-api | policy-db-migrator (172.17.0.8:6824) open 23:16:56 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:56 policy-api | 23:16:56 policy-api | . ____ _ __ _ _ 23:16:56 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:56 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:56 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:56 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:56 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:56 policy-api | :: Spring Boot :: (v3.1.4) 23:16:56 policy-api | 23:16:56 policy-api | [2024-02-05T23:14:34.528+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:56 policy-api | [2024-02-05T23:14:34.530+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:56 policy-api | [2024-02-05T23:14:36.201+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:56 policy-api | [2024-02-05T23:14:36.285+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 75 ms. Found 6 JPA repository interfaces. 
23:16:56 policy-api | [2024-02-05T23:14:36.691+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:56 policy-api | [2024-02-05T23:14:36.692+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:56 policy-api | [2024-02-05T23:14:37.291+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:56 policy-api | [2024-02-05T23:14:37.299+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:56 policy-api | [2024-02-05T23:14:37.301+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:56 policy-api | [2024-02-05T23:14:37.302+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] 23:16:56 policy-api | [2024-02-05T23:14:37.395+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:56 policy-api | [2024-02-05T23:14:37.395+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2805 ms 23:16:56 policy-api | [2024-02-05T23:14:37.794+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:56 policy-api | [2024-02-05T23:14:37.857+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:56 policy-api | [2024-02-05T23:14:37.860+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:56 policy-api | [2024-02-05T23:14:37.909+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:56 policy-api | [2024-02-05T23:14:38.230+00:00|INFO|SpringPersistenceUnitInfo|main] 
No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:56 policy-api | [2024-02-05T23:14:38.250+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:56 policy-api | [2024-02-05T23:14:38.362+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717 23:16:56 policy-api | [2024-02-05T23:14:38.364+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:56 policy-api | [2024-02-05T23:14:38.396+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 23:16:56 policy-apex-pdp | Waiting for mariadb port 3306... 23:16:56 policy-apex-pdp | mariadb (172.17.0.4:3306) open 23:16:56 policy-apex-pdp | Waiting for kafka port 9092... 23:16:56 policy-apex-pdp | kafka (172.17.0.6:9092) open 23:16:56 policy-apex-pdp | Waiting for pap port 6969... 23:16:56 policy-apex-pdp | pap (172.17.0.10:6969) open 23:16:56 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.404+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, 
/opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.559+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:56 policy-apex-pdp | allow.auto.create.topics = true 23:16:56 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:56 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:56 policy-apex-pdp | auto.offset.reset = latest 23:16:56 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:56 policy-apex-pdp | check.crcs = true 23:16:56 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:56 policy-apex-pdp | client.id = consumer-447a3058-d755-46ac-8e2e-59b142489c6a-1 23:16:56 policy-apex-pdp | client.rack = 23:16:56 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:56 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:56 policy-apex-pdp | enable.auto.commit = true 23:16:56 policy-apex-pdp | exclude.internal.topics = true 23:16:56 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:56 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:56 policy-apex-pdp | fetch.min.bytes = 1 23:16:56 policy-apex-pdp | group.id = 447a3058-d755-46ac-8e2e-59b142489c6a 23:16:56 policy-apex-pdp | group.instance.id = null 23:16:56 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:56 policy-apex-pdp | interceptor.classes = [] 23:16:56 policy-apex-pdp | internal.leave.group.on.close = true 23:16:56 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:56 policy-apex-pdp | isolation.level = read_uncommitted 23:16:56 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:56 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:56 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:56 policy-apex-pdp | max.poll.records = 500 23:16:56 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:56 policy-apex-pdp | metric.reporters = [] 23:16:56 policy-apex-pdp | metrics.num.samples = 2 23:16:56 policy-apex-pdp | 
metrics.recording.level = INFO 23:16:56 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:56 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:56 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:56 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:56 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:56 policy-apex-pdp | request.timeout.ms = 30000 23:16:56 policy-apex-pdp | retry.backoff.ms = 100 23:16:56 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:56 policy-apex-pdp | sasl.jaas.config = null 23:16:56 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:56 kafka | group.max.session.timeout.ms = 1800000 23:16:56 kafka | group.max.size = 2147483647 23:16:56 kafka | group.min.session.timeout.ms = 6000 23:16:56 kafka | initial.broker.registration.timeout.ms = 60000 23:16:56 kafka | inter.broker.listener.name = PLAINTEXT 23:16:56 kafka | inter.broker.protocol.version = 3.5-IV2 23:16:56 kafka | kafka.metrics.polling.interval.secs = 10 23:16:56 kafka | kafka.metrics.reporters = [] 23:16:56 kafka | leader.imbalance.check.interval.seconds = 300 23:16:56 kafka | leader.imbalance.per.broker.percentage = 10 23:16:56 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:56 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:56 kafka | log.cleaner.backoff.ms = 15000 23:16:56 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:56 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:56 kafka | log.cleaner.enable = true 23:16:56 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:16:56 kafka | log.cleaner.io.buffer.size = 524288 23:16:56 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:16:56 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:16:56 kafka | log.cleaner.min.cleanable.ratio = 0.5 
23:16:56 kafka | log.cleaner.min.compaction.lag.ms = 0 23:16:56 kafka | log.cleaner.threads = 1 23:16:56 kafka | log.cleanup.policy = [delete] 23:16:56 kafka | log.dir = /tmp/kafka-logs 23:16:56 kafka | log.dirs = /var/lib/kafka/data 23:16:56 kafka | log.flush.interval.messages = 9223372036854775807 23:16:56 kafka | log.flush.interval.ms = null 23:16:56 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:16:56 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:16:56 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:16:56 kafka | log.index.interval.bytes = 4096 23:16:56 kafka | log.index.size.max.bytes = 10485760 23:16:56 kafka | log.message.downconversion.enable = true 23:16:56 kafka | log.message.format.version = 3.0-IV1 23:16:56 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:16:56 kafka | log.message.timestamp.type = CreateTime 23:16:56 kafka | log.preallocate = false 23:16:56 kafka | log.retention.bytes = -1 23:16:56 kafka | log.retention.check.interval.ms = 300000 23:16:56 kafka | log.retention.hours = 168 23:16:56 kafka | log.retention.minutes = null 23:16:56 kafka | log.retention.ms = null 23:16:56 kafka | log.roll.hours = 168 23:16:56 kafka | log.roll.jitter.hours = 0 23:16:56 kafka | log.roll.jitter.ms = null 23:16:56 kafka | log.roll.ms = null 23:16:56 kafka | log.segment.bytes = 1073741824 23:16:56 kafka | log.segment.delete.delay.ms = 60000 23:16:56 kafka | max.connection.creation.rate = 2147483647 23:16:56 kafka | max.connections = 2147483647 23:16:56 kafka | max.connections.per.ip = 2147483647 23:16:56 kafka | max.connections.per.ip.overrides = 23:16:56 kafka | max.incremental.fetch.session.cache.slots = 1000 23:16:56 kafka | message.max.bytes = 1048588 23:16:56 kafka | metadata.log.dir = null 23:16:56 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:16:56 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:16:56 kafka | metadata.log.segment.bytes = 1073741824 
23:16:56 kafka | metadata.log.segment.min.bytes = 8388608 23:16:56 kafka | metadata.log.segment.ms = 604800000 23:16:56 kafka | metadata.max.idle.interval.ms = 500 23:16:56 kafka | metadata.max.retention.bytes = 104857600 23:16:56 kafka | metadata.max.retention.ms = 604800000 23:16:56 kafka | metric.reporters = [] 23:16:56 policy-api | [2024-02-05T23:14:38.398+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 23:16:56 policy-api | [2024-02-05T23:14:40.222+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:56 policy-api | [2024-02-05T23:14:40.227+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:56 policy-api | [2024-02-05T23:14:41.508+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:56 policy-api | [2024-02-05T23:14:42.255+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:56 policy-api | [2024-02-05T23:14:43.302+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:56 policy-api | [2024-02-05T23:14:43.484+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@19a7e618, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@22ccd80f, org.springframework.security.web.context.SecurityContextHolderFilter@2f29400e, org.springframework.security.web.header.HeaderWriterFilter@56d3e4a9, org.springframework.security.web.authentication.logout.LogoutFilter@ab8b1ef, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@543d242e, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@547a79cd, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@25e7e6d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@31829b82, org.springframework.security.web.access.ExceptionTranslationFilter@36c6d53b, org.springframework.security.web.access.intercept.AuthorizationFilter@680f7a5e] 23:16:56 policy-api | [2024-02-05T23:14:44.330+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:56 policy-api | [2024-02-05T23:14:44.391+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:56 policy-api | [2024-02-05T23:14:44.423+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:16:56 policy-api | [2024-02-05T23:14:44.440+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.63 seconds (process running for 11.197) 23:16:56 policy-api | [2024-02-05T23:15:03.221+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:56 policy-api | [2024-02-05T23:15:03.221+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:16:56 policy-api | 
[2024-02-05T23:15:03.223+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 23:16:56 policy-api | [2024-02-05T23:15:03.494+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 23:16:56 policy-api | [] 23:16:56 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:56 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:56 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:56 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:56 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:56 policy-apex-pdp | sasl.login.class = null 23:16:56 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:56 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:56 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:56 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:56 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:56 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:56 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:56 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:56 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:56 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:56 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:56 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:56 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:56 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:56 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:56 policy-apex-pdp | 
security.protocol = PLAINTEXT 23:16:56 policy-apex-pdp | security.providers = null 23:16:56 policy-apex-pdp | send.buffer.bytes = 131072 23:16:56 policy-apex-pdp | session.timeout.ms = 45000 23:16:56 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:56 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:56 policy-apex-pdp | ssl.cipher.suites = null 23:16:56 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:56 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:56 policy-apex-pdp | ssl.engine.factory.class = null 23:16:56 policy-apex-pdp | ssl.key.password = null 23:16:56 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:56 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:56 policy-apex-pdp | ssl.keystore.key = null 23:16:56 policy-apex-pdp | ssl.keystore.location = null 23:16:56 policy-apex-pdp | ssl.keystore.password = null 23:16:56 policy-apex-pdp | ssl.keystore.type = JKS 23:16:56 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:56 policy-apex-pdp | ssl.provider = null 23:16:56 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.918194367Z level=info msg="Migration successfully executed" id="create data_source table" duration=915.898µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.921315265Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.922162948Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=847.433µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.926996394Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.927827143Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=830.419µs 23:16:56 grafana | 
logger=migrator t=2024-02-05T23:14:18.93182706Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.932604447Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=777.226µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.936752758Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.937516491Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=759.392µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.941553478Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.9494875Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.939202ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.952560427Z level=info msg="Executing migration" id="create data_source table v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.953386004Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=825.107µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.956398828Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.957358185Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=959.017µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.961303461Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.962070455Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=766.563µs 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.966089527Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.966692394Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=600.847µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.969778565Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.972077667Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.298572ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.975959337Z level=info msg="Executing migration" id="Add secure json data column" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.978237095Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.276927ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.981436261Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.981488913Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=54.002µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.985126778Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.985536081Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=408.753µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.989948213Z level=info msg="Executing migration" id="Add read_only data column" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.992562966Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.621986ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.995844511Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.996098569Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=257.668µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.998714812Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:18.998954877Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=239.684µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.002177378Z level=info msg="Executing migration" id="Add uid column" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.004586789Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.410392ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.008889852Z level=info msg="Executing migration" id="Update uid value" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.009158623Z level=info msg="Migration successfully executed" id="Update uid value" duration=268.231µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.01248309Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.013531729Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.048559ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.01678055Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.017673223Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=889.242µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.021587905Z level=info msg="Executing migration" id="create api_key table" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.022467746Z level=info msg="Migration successfully executed" id="create api_key table" 
duration=885.233µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.025719357Z level=info msg="Executing migration" id="add index api_key.account_id" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.026578102Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=858.625µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.029620486Z level=info msg="Executing migration" id="add index api_key.key" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.030457316Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=836.29µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.034534796Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.035620133Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.084967ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.040494744Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.041282043Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=787.139µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.044201488Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.044990459Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=789.05µs 23:16:56 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:56 policy-apex-pdp | ssl.truststore.certificates = null 23:16:56 policy-apex-pdp | ssl.truststore.location = null 23:16:56 policy-apex-pdp | ssl.truststore.password = null 23:16:56 policy-apex-pdp | ssl.truststore.type = JKS 23:16:56 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:56 
policy-apex-pdp | 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.699+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.699+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.699+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174898698 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.701+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-1, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Subscribed to topic(s): policy-pdp-pap 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.713+00:00|INFO|ServiceManager|main] service manager starting 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.713+00:00|INFO|ServiceManager|main] service manager starting topics 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.719+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=447a3058-d755-46ac-8e2e-59b142489c6a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.738+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:56 policy-apex-pdp | allow.auto.create.topics = true 23:16:56 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:56 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:56 policy-apex-pdp | auto.offset.reset = latest 23:16:56 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:56 policy-apex-pdp | check.crcs = true 23:16:56 policy-apex-pdp | 
client.dns.lookup = use_all_dns_ips 23:16:56 policy-apex-pdp | client.id = consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2 23:16:56 policy-apex-pdp | client.rack = 23:16:56 kafka | metrics.num.samples = 2 23:16:56 kafka | metrics.recording.level = INFO 23:16:56 kafka | metrics.sample.window.ms = 30000 23:16:56 kafka | min.insync.replicas = 1 23:16:56 kafka | node.id = 1 23:16:56 kafka | num.io.threads = 8 23:16:56 kafka | num.network.threads = 3 23:16:56 kafka | num.partitions = 1 23:16:56 kafka | num.recovery.threads.per.data.dir = 1 23:16:56 kafka | num.replica.alter.log.dirs.threads = null 23:16:56 kafka | num.replica.fetchers = 1 23:16:56 kafka | offset.metadata.max.bytes = 4096 23:16:56 kafka | offsets.commit.required.acks = -1 23:16:56 kafka | offsets.commit.timeout.ms = 5000 23:16:56 kafka | offsets.load.buffer.size = 5242880 23:16:56 kafka | offsets.retention.check.interval.ms = 600000 23:16:56 kafka | offsets.retention.minutes = 10080 23:16:56 kafka | offsets.topic.compression.codec = 0 23:16:56 kafka | offsets.topic.num.partitions = 50 23:16:56 kafka | offsets.topic.replication.factor = 1 23:16:56 kafka | offsets.topic.segment.bytes = 104857600 23:16:56 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:56 kafka | password.encoder.iterations = 4096 23:16:56 kafka | password.encoder.key.length = 128 23:16:56 kafka | password.encoder.keyfactory.algorithm = null 23:16:56 kafka | password.encoder.old.secret = null 23:16:56 kafka | password.encoder.secret = null 23:16:56 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:56 kafka | process.roles = [] 23:16:56 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:56 kafka | producer.id.expiration.ms = 86400000 23:16:56 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:56 kafka | queued.max.request.bytes = -1 23:16:56 kafka | queued.max.requests = 500 23:16:56 kafka | quota.window.num = 11 23:16:56 
kafka | quota.window.size.seconds = 1 23:16:56 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:56 kafka | remote.log.manager.task.interval.ms = 30000 23:16:56 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:56 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:56 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:56 kafka | remote.log.manager.thread.pool.size = 10 23:16:56 kafka | remote.log.metadata.manager.class.name = null 23:16:56 kafka | remote.log.metadata.manager.class.path = null 23:16:56 kafka | remote.log.metadata.manager.impl.prefix = null 23:16:56 kafka | remote.log.metadata.manager.listener.name = null 23:16:56 kafka | remote.log.reader.max.pending.tasks = 100 23:16:56 kafka | remote.log.reader.threads = 10 23:16:56 kafka | remote.log.storage.manager.class.name = null 23:16:56 kafka | remote.log.storage.manager.class.path = null 23:16:56 kafka | remote.log.storage.manager.impl.prefix = null 23:16:56 kafka | remote.log.storage.system.enable = false 23:16:56 kafka | replica.fetch.backoff.ms = 1000 23:16:56 kafka | replica.fetch.max.bytes = 1048576 23:16:56 kafka | replica.fetch.min.bytes = 1 23:16:56 kafka | replica.fetch.response.max.bytes = 10485760 23:16:56 kafka | replica.fetch.wait.max.ms = 500 23:16:56 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:56 kafka | replica.lag.time.max.ms = 30000 23:16:56 kafka | replica.selector.class = null 23:16:56 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:56 kafka | replica.socket.timeout.ms = 30000 23:16:56 kafka | replication.quota.window.num = 11 23:16:56 kafka | replication.quota.window.size.seconds = 1 23:16:56 kafka | request.timeout.ms = 30000 23:16:56 kafka | reserved.broker.max.id = 1000 23:16:56 kafka | sasl.client.callback.handler.class = null 23:16:56 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:56 kafka | sasl.jaas.config = null 23:16:56 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:56 kafka | 
sasl.kerberos.min.time.before.relogin = 60000 23:16:56 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:56 kafka | sasl.kerberos.service.name = null 23:16:56 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:56 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:56 kafka | sasl.login.callback.handler.class = null 23:16:56 kafka | sasl.login.class = null 23:16:56 kafka | sasl.login.connect.timeout.ms = null 23:16:56 kafka | sasl.login.read.timeout.ms = null 23:16:56 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:56 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:56 kafka | sasl.login.refresh.window.factor = 0.8 23:16:56 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:56 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.049163799Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.049948478Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=783.238µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.053014316Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.06171467Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.699933ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.065983742Z level=info msg="Executing migration" id="create api_key table v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.066760659Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=776.397µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.069949155Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.070829306Z level=info msg="Migration 
successfully executed" id="create index IDX_api_key_org_id - v2" duration=879.891µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.074076406Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.074947844Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=871.048µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.079187791Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.080059939Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=877.12µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.083229332Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.083637474Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=409.163µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.086952411Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.087622233Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=674.073µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.091435432Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.091460147Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=25.586µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.094628229Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.09717912Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.54789ms 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:19.100111869Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.102595685Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.483016ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.105455656Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.105693121Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=236.974µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.109610474Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.112067844Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.45699ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.115361104Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.117919547Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.557793ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.121084808Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.121847392Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=762.153µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.125486621Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.126098301Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=611.321µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.152653962Z level=info 
msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.153930712Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.275941ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.157461118Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.158853044Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.391267ms 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.163276213Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.164133908Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=856.855µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.167394071Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.168350629Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=955.189µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.17160355Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.171751554Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=147.193µs 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.175712856Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.175737282Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=25.366µs 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:19.178232931Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:56 policy-pap | Waiting for mariadb port 3306... 23:16:56 policy-pap | mariadb (172.17.0.4:3306) open 23:16:56 policy-pap | Waiting for kafka port 9092... 23:16:56 policy-pap | kafka (172.17.0.6:9092) open 23:16:56 policy-pap | Waiting for api port 6969... 23:16:56 policy-pap | api (172.17.0.9:6969) open 23:16:56 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:56 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:56 policy-pap | 23:16:56 policy-pap | . ____ _ __ _ _ 23:16:56 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:56 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:56 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:56 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:56 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:56 policy-pap | :: Spring Boot :: (v3.1.7) 23:16:56 policy-pap | 23:16:56 policy-pap | [2024-02-05T23:14:47.742+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 35 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:56 policy-pap | [2024-02-05T23:14:47.744+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:56 policy-pap | [2024-02-05T23:14:49.556+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:56 policy-pap | [2024-02-05T23:14:49.661+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 95 ms. Found 7 JPA repository interfaces. 23:16:56 policy-pap | [2024-02-05T23:14:50.122+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:56 policy-pap | [2024-02-05T23:14:50.123+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:56 policy-pap | [2024-02-05T23:14:50.841+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:56 policy-pap | [2024-02-05T23:14:50.851+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:56 policy-pap | [2024-02-05T23:14:50.853+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:56 policy-pap | [2024-02-05T23:14:50.854+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.180936807Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.705516ms 23:16:56 kafka | sasl.login.retry.backoff.ms = 100 23:16:56 policy-pap | [2024-02-05T23:14:50.944+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:56 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.18406471Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:56 policy-db-migrator | Waiting for mariadb port 3306... 
23:16:56 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:56 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:56 policy-pap | [2024-02-05T23:14:50.944+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3121 ms 23:16:56 prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:56 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.18696528Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.900311ms 23:16:56 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:56 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:56 simulator | overriding logback.xml 23:16:56 policy-pap | [2024-02-05T23:14:51.370+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:56 prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" 23:16:56 policy-apex-pdp | enable.auto.commit = true 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.190439922Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:56 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:56 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:56 simulator | 2024-02-05 23:14:23,956 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:56 policy-pap | [2024-02-05T23:14:51.461+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:56 prometheus | ts=2024-02-05T23:14:17.387Z 
caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" 23:16:56 policy-apex-pdp | exclude.internal.topics = true 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.19055829Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=118.168µs 23:16:56 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:56 kafka | sasl.oauthbearer.expected.audience = null 23:16:56 simulator | 2024-02-05 23:14:24,035 INFO org.onap.policy.models.simulators starting 23:16:56 policy-pap | [2024-02-05T23:14:51.465+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:56 prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:56 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.194556511Z level=info msg="Executing migration" id="create quota table v1" 23:16:56 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:56 kafka | sasl.oauthbearer.expected.issuer = null 23:16:56 simulator | 2024-02-05 23:14:24,035 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:16:56 policy-pap | [2024-02-05T23:14:51.518+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:56 prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:56 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.195141884Z level=info msg="Migration successfully executed" id="create quota table v1" duration=585.473µs 23:16:56 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 
(tcp) failed: Connection refused 23:16:56 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:56 simulator | 2024-02-05 23:14:24,253 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:16:56 policy-pap | [2024-02-05T23:14:51.914+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:56 prometheus | ts=2024-02-05T23:14:17.387Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:56 policy-apex-pdp | fetch.min.bytes = 1 23:16:56 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.198470112Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:16:56 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:56 simulator | 2024-02-05 23:14:24,254 INFO org.onap.policy.models.simulators starting A&AI simulator 23:16:56 policy-pap | [2024-02-05T23:14:51.936+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:56 prometheus | ts=2024-02-05T23:14:17.389Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:56 policy-apex-pdp | group.id = 447a3058-d755-46ac-8e2e-59b142489c6a 23:16:56 policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.199470031Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.002779ms 23:16:56 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:56 simulator | 2024-02-05 23:14:24,346 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:56 policy-pap | [2024-02-05T23:14:52.050+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4068102e 23:16:56 prometheus | ts=2024-02-05T23:14:17.390Z caller=main.go:1039 level=info msg="Starting TSDB ..." 
23:16:56 policy-apex-pdp | group.instance.id = null 23:16:56 policy-db-migrator | 321 blocks 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.202644924Z level=info msg="Executing migration" id="Update quota table charset" 23:16:56 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:56 simulator | 2024-02-05 23:14:24,357 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:56 policy-pap | [2024-02-05T23:14:52.052+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
23:16:56 prometheus | ts=2024-02-05T23:14:17.394Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090 23:16:56 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:56 policy-db-migrator | Preparing upgrade release version: 0800 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.202676431Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=32.658µs 23:16:56 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:56 simulator | 2024-02-05 23:14:24,359 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:56 policy-pap | [2024-02-05T23:14:52.081+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 23:16:56 prometheus | ts=2024-02-05T23:14:17.396Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 23:16:56 policy-apex-pdp | interceptor.classes = [] 23:16:56 policy-db-migrator | Preparing upgrade release version: 0900 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.206906324Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:56 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:56 simulator | 2024-02-05 23:14:24,366 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 23:16:56 policy-pap | [2024-02-05T23:14:52.082+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 23:16:56 prometheus | ts=2024-02-05T23:14:17.398Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:56 policy-apex-pdp | internal.leave.group.on.close = true 23:16:56 policy-db-migrator | Preparing upgrade release version: 1000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.207644843Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=731.237µs 23:16:56 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:56 simulator | 2024-02-05 23:14:24,423 INFO Session workerName=node0 23:16:56 policy-pap | [2024-02-05T23:14:53.952+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:56 prometheus | ts=2024-02-05T23:14:17.398Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.401µs 23:16:56 prometheus | ts=2024-02-05T23:14:17.398Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:56 policy-db-migrator | Preparing upgrade release version: 1100 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.210697269Z level=info msg="Executing migration" id="create index 
UQE_plugin_setting_org_id_plugin_id - v1"
23:16:56 kafka | sasl.server.callback.handler.class = null
23:16:56 simulator | 2024-02-05 23:14:24,893 INFO Using GSON for REST calls
23:16:56 policy-pap | [2024-02-05T23:14:53.956+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
23:16:56 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:56 prometheus | ts=2024-02-05T23:14:17.398Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
23:16:56 policy-db-migrator | Preparing upgrade release version: 1200
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.212221386Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.523887ms
23:16:56 kafka | sasl.server.max.receive.size = 524288
23:16:56 simulator | 2024-02-05 23:14:24,956 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}
23:16:56 policy-pap | [2024-02-05T23:14:54.560+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
23:16:56 policy-apex-pdp | isolation.level = read_uncommitted
23:16:56 prometheus | ts=2024-02-05T23:14:17.398Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=122.428µs wal_replay_duration=287.356µs wbl_replay_duration=260ns total_replay_duration=485.851µs
23:16:56 policy-db-migrator | Preparing upgrade release version: 1300
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.215957948Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
23:16:56 kafka | security.inter.broker.protocol = PLAINTEXT
23:16:56 simulator | 2024-02-05 23:14:24,969 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
23:16:56 policy-pap | [2024-02-05T23:14:55.158+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
23:16:56 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:56 prometheus | ts=2024-02-05T23:14:17.402Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC
23:16:56 policy-db-migrator | Done
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.222897609Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=6.941711ms
23:16:56 kafka | security.providers = null
23:16:56 simulator | 2024-02-05 23:14:24,977 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1568ms
23:16:56 policy-pap | [2024-02-05T23:14:55.284+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
23:16:56 policy-apex-pdp | max.partition.fetch.bytes = 1048576
23:16:56 prometheus | ts=2024-02-05T23:14:17.402Z caller=main.go:1063 level=info msg="TSDB started"
23:16:56 policy-db-migrator | name version
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.226833815Z level=info msg="Executing migration" id="Update plugin_setting table charset"
23:16:56 kafka | server.max.startup.time.ms = 9223372036854775807
23:16:56 simulator | 2024-02-05 23:14:24,977 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4382 ms.
23:16:56 policy-pap | [2024-02-05T23:14:55.559+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:56 policy-apex-pdp | max.poll.interval.ms = 300000
23:16:56 prometheus | ts=2024-02-05T23:14:17.402Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
23:16:56 policy-db-migrator | policyadmin 0
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.226869323Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=40.079µs
23:16:56 kafka | socket.connection.setup.timeout.max.ms = 30000
23:16:56 simulator | 2024-02-05 23:14:24,981 INFO org.onap.policy.models.simulators starting SDNC simulator
23:16:56 policy-pap | allow.auto.create.topics = true
23:16:56 policy-apex-pdp | max.poll.records = 500
23:16:56 prometheus | ts=2024-02-05T23:14:17.403Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.172308ms db_storage=1.35µs remote_storage=3.091µs web_handler=670ns query_engine=1.4µs scrape=221.39µs scrape_sd=116.137µs notify=23.865µs notify_sd=13.123µs rules=2.061µs tracing=5.501µs
23:16:56 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.230175358Z level=info msg="Executing migration" id="create session table"
23:16:56 kafka | socket.connection.setup.timeout.ms = 10000
23:16:56 simulator | 2024-02-05 23:14:24,983 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:56 policy-pap | auto.commit.interval.ms = 5000
23:16:56 policy-apex-pdp | metadata.max.age.ms = 300000
23:16:56 prometheus | ts=2024-02-05T23:14:17.403Z caller=main.go:1024 level=info msg="Server is ready to receive web requests."
23:16:56 policy-db-migrator | upgrade: 0 -> 1300
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.231303364Z level=info msg="Migration successfully executed" id="create session table" duration=1.127667ms
23:16:56 kafka | socket.listen.backlog.size = 50
23:16:56 simulator | 2024-02-05 23:14:24,984 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:56 policy-pap | auto.include.jmx.reporter = true
23:16:56 policy-apex-pdp | metric.reporters = []
23:16:56 prometheus | ts=2024-02-05T23:14:17.404Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.235513984Z level=info msg="Executing migration" id="Drop old table playlist table"
23:16:56 kafka | socket.receive.buffer.bytes = 102400
23:16:56 simulator | 2024-02-05 23:14:24,985 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:56 policy-pap | auto.offset.reset = latest
23:16:56 policy-apex-pdp | metrics.num.samples = 2
23:16:56 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.235623669Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=109.385µs
23:16:56 kafka | socket.request.max.bytes = 104857600
23:16:56 simulator | 2024-02-05 23:14:24,986 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
23:16:56 policy-pap | bootstrap.servers = [kafka:9092]
23:16:56 policy-apex-pdp | metrics.recording.level = INFO
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.238968331Z level=info msg="Executing migration" id="Drop old table playlist_item table"
23:16:56 kafka | socket.send.buffer.bytes = 102400
23:16:56 simulator | 2024-02-05 23:14:24,996 INFO Session workerName=node0
23:16:56 policy-pap | check.crcs = true
23:16:56 policy-apex-pdp | metrics.sample.window.ms = 30000
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.239045269Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=77.457µs
23:16:56 kafka | ssl.cipher.suites = []
23:16:56 simulator | 2024-02-05 23:14:25,073 INFO Using GSON for REST calls
23:16:56 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:56 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.243124858Z level=info msg="Executing migration" id="create playlist table v2"
23:16:56 kafka | ssl.client.auth = none
23:16:56 simulator | 2024-02-05 23:14:25,083 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}
23:16:56 policy-pap | client.id = consumer-82113737-2238-440a-b31e-67419d0ce49a-1
23:16:56 policy-apex-pdp | receive.buffer.bytes = 65536
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.243822738Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=697.47µs
23:16:56 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:56 simulator | 2024-02-05 23:14:25,084 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
23:16:56 policy-pap | client.rack =
23:16:56 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.247251149Z level=info msg="Executing migration" id="create playlist item table v2"
23:16:56 kafka | ssl.endpoint.identification.algorithm = https
23:16:56 simulator | 2024-02-05 23:14:25,084 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1675ms
23:16:56 policy-pap | connections.max.idle.ms = 540000
23:16:56 policy-apex-pdp | reconnect.backoff.ms = 50
23:16:56 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.247977664Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=726.045µs
23:16:56 kafka | ssl.engine.factory.class = null
23:16:56 policy-pap | default.api.timeout.ms = 60000
23:16:56 simulator | 2024-02-05 23:14:25,084 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4900 ms.
23:16:56 policy-apex-pdp | request.timeout.ms = 30000
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.251424839Z level=info msg="Executing migration" id="Update playlist table charset"
23:16:56 kafka | ssl.key.password = null
23:16:56 policy-pap | enable.auto.commit = true
23:16:56 simulator | 2024-02-05 23:14:25,086 INFO org.onap.policy.models.simulators starting SO simulator
23:16:56 policy-apex-pdp | retry.backoff.ms = 100
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.251449935Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=28.867µs
23:16:56 kafka | ssl.keymanager.algorithm = SunX509
23:16:56 policy-pap | exclude.internal.topics = true
23:16:56 simulator | 2024-02-05 23:14:25,089 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:56 policy-apex-pdp | sasl.client.callback.handler.class = null
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.25568357Z level=info msg="Executing migration" id="Update playlist_item table charset"
23:16:56 kafka | ssl.keystore.certificate.chain = null
23:16:56 policy-pap | fetch.max.bytes = 52428800
23:16:56 simulator | 2024-02-05 23:14:25,090 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:56 policy-apex-pdp | sasl.jaas.config = null
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.255710026Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.436µs
23:16:56 kafka | ssl.keystore.key = null
23:16:56 policy-pap | fetch.max.wait.ms = 500
23:16:56 simulator | 2024-02-05 23:14:25,092 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:56 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.25906462Z level=info msg="Executing migration" id="Add playlist column created_at"
23:16:56 kafka | ssl.keystore.location = null
23:16:56 policy-pap | fetch.min.bytes = 1
23:16:56 simulator | 2024-02-05 23:14:25,093 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
23:16:56 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
23:16:56 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.262114345Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.049495ms
23:16:56 kafka | ssl.keystore.password = null
23:16:56 policy-pap | group.id = 82113737-2238-440a-b31e-67419d0ce49a
23:16:56 simulator | 2024-02-05 23:14:25,107 INFO Session workerName=node0
23:16:56 policy-apex-pdp | sasl.kerberos.service.name = null
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.265545147Z level=info msg="Executing migration" id="Add playlist column updated_at"
23:16:56 kafka | ssl.keystore.type = JKS
23:16:56 policy-pap | group.instance.id = null
23:16:56 simulator | 2024-02-05 23:14:25,180 INFO Using GSON for REST calls
23:16:56 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.268627469Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.081973ms
23:16:56 kafka | ssl.principal.mapping.rules = DEFAULT
23:16:56 policy-pap | heartbeat.interval.ms = 3000
23:16:56 simulator | 2024-02-05 23:14:25,192 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}
23:16:56 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.272044609Z level=info msg="Executing migration" id="drop preferences table v2"
23:16:56 kafka | ssl.protocol = TLSv1.3
23:16:56 policy-pap | interceptor.classes = []
23:16:56 simulator | 2024-02-05 23:14:25,193 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
23:16:56 policy-db-migrator |
23:16:56 kafka | ssl.provider = null
23:16:56 policy-pap | internal.leave.group.on.close = true
23:16:56 simulator | 2024-02-05 23:14:25,193 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @1785ms
23:16:56 policy-apex-pdp | sasl.login.callback.handler.class = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.272126007Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=81.719µs
23:16:56 policy-db-migrator |
23:16:56 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:56 simulator | 2024-02-05 23:14:25,194 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4899 ms.
23:16:56 policy-apex-pdp | sasl.login.class = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.276285094Z level=info msg="Executing migration" id="drop preferences table v3"
23:16:56 kafka | ssl.secure.random.implementation = null
23:16:56 policy-pap | isolation.level = read_uncommitted
23:16:56 simulator | 2024-02-05 23:14:25,195 INFO org.onap.policy.models.simulators starting VFC simulator
23:16:56 policy-apex-pdp | sasl.login.connect.timeout.ms = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.276365533Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=84.839µs
23:16:56 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
23:16:56 kafka | ssl.trustmanager.algorithm = PKIX
23:16:56 simulator | 2024-02-05 23:14:25,199 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:56 policy-apex-pdp | sasl.login.read.timeout.ms = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.278978718Z level=info msg="Executing migration" id="create preferences table v3"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:56 kafka | ssl.truststore.certificates = null
23:16:56 simulator | 2024-02-05 23:14:25,200 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:56 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.279744763Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=765.574µs
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:56 policy-pap | max.partition.fetch.bytes = 1048576
23:16:56 kafka | ssl.truststore.location = null
23:16:56 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.285084409Z level=info msg="Executing migration" id="Update preferences table charset"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | max.poll.interval.ms = 300000
23:16:56 simulator | 2024-02-05 23:14:25,206 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:56 kafka | ssl.truststore.password = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.285126509Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=53.013µs
23:16:56 policy-db-migrator |
23:16:56 policy-pap | max.poll.records = 500
23:16:56 simulator | 2024-02-05 23:14:25,207 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
23:16:56 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
23:16:56 kafka | ssl.truststore.type = JKS
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.289799094Z level=info msg="Executing migration" id="Add column team_id in preferences"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | metadata.max.age.ms = 300000
23:16:56 simulator | 2024-02-05 23:14:25,215 INFO Session workerName=node0
23:16:56 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
23:16:56 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.294506367Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.69151ms
23:16:56 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
23:16:56 policy-pap | metric.reporters = []
23:16:56 simulator | 2024-02-05 23:14:25,256 INFO Using GSON for REST calls
23:16:56 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
23:16:56 kafka | transaction.max.timeout.ms = 900000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.297815721Z level=info msg="Executing migration" id="Update team_id column values in preferences"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | metrics.num.samples = 2
23:16:56 simulator | 2024-02-05 23:14:25,264 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}
23:16:56 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
23:16:56 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.29803027Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=214.248µs
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:16:56 policy-pap | metrics.recording.level = INFO
23:16:56 simulator | 2024-02-05 23:14:25,265 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
23:16:56 policy-apex-pdp | sasl.mechanism = GSSAPI
23:16:56 kafka | transaction.state.log.load.buffer.size = 5242880
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.301502531Z level=info msg="Executing migration" id="Add column week_start in preferences"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | metrics.sample.window.ms = 30000
23:16:56 simulator | 2024-02-05 23:14:25,265 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @1856ms
23:16:56 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
23:16:56 kafka | transaction.state.log.min.isr = 2
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.30457216Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.06904ms
23:16:56 policy-db-migrator |
23:16:56 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:56 simulator | 2024-02-05 23:14:25,265 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4941 ms.
23:16:56 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:16:56 kafka | transaction.state.log.num.partitions = 50
23:16:56 policy-db-migrator |
23:16:56 policy-pap | receive.buffer.bytes = 65536
23:16:56 simulator | 2024-02-05 23:14:25,266 INFO org.onap.policy.models.simulators started
23:16:56 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.307957421Z level=info msg="Executing migration" id="Add column preferences.json_data"
23:16:56 kafka | transaction.state.log.replication.factor = 3
23:16:56 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
23:16:56 policy-pap | reconnect.backoff.max.ms = 1000
23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.311069361Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.11151ms
23:16:56 kafka | transaction.state.log.segment.bytes = 104857600
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | reconnect.backoff.ms = 50
23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.3152326Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
23:16:56 kafka | transactional.id.expiration.ms = 604800000
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
23:16:56 policy-pap | request.timeout.ms = 30000
23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.315300505Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=68.306µs
23:16:56 kafka | unclean.leader.election.enable = false
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | retry.backoff.ms = 100
23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.318671823Z level=info msg="Executing migration" id="Add preferences index org_id"
23:16:56 kafka | unstable.api.versions.enable = false
23:16:56 policy-db-migrator |
23:16:56 policy-pap | sasl.client.callback.handler.class = null
23:16:56 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.31961954Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=947.466µs
23:16:56 kafka | zookeeper.clientCnxnSocket = null
23:16:56 policy-db-migrator |
23:16:56 policy-pap | sasl.jaas.config = null
23:16:56 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.323139412Z level=info msg="Executing migration" id="Add preferences index user_id"
23:16:56 kafka | zookeeper.connect = zookeeper:2181
23:16:56 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
23:16:56 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:56 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.324017591Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=877.85µs
23:16:56 kafka | zookeeper.connection.timeout.ms = null
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:56 policy-apex-pdp | security.protocol = PLAINTEXT
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.328015093Z level=info msg="Executing migration" id="create alert table v1"
23:16:56 kafka | zookeeper.max.in.flight.requests = 10
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:56 policy-pap | sasl.kerberos.service.name = null
23:16:56 policy-apex-pdp | security.providers = null
23:16:56 kafka | zookeeper.metadata.migration.enable = false
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.329073284Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.057741ms
23:16:56 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:56 policy-apex-pdp | send.buffer.bytes = 131072
23:16:56 kafka | zookeeper.session.timeout.ms = 18000
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.332219511Z level=info msg="Executing migration" id="add index alert org_id & id "
23:16:56 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:56 policy-apex-pdp | session.timeout.ms = 45000
23:16:56 kafka | zookeeper.set.acl = false
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.333209346Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=984.274µs
23:16:56 policy-pap | sasl.login.callback.handler.class = null
23:16:56 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:16:56 kafka | zookeeper.ssl.cipher.suites = null
23:16:56 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.337477909Z level=info msg="Executing migration" id="add index alert state"
23:16:56 policy-pap | sasl.login.class = null
23:16:56 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:16:56 kafka | zookeeper.ssl.client.enable = false
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.338368301Z level=info msg="Migration successfully executed" id="add index alert state" duration=890.023µs
23:16:56 policy-apex-pdp | ssl.cipher.suites = null
23:16:56 kafka | zookeeper.ssl.crl.enable = false
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.353767411Z level=info msg="Executing migration" id="add index alert dashboard_id"
23:16:56 policy-pap | sasl.login.connect.timeout.ms = null
23:16:56 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:56 kafka | zookeeper.ssl.enabled.protocols = null
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator |
23:16:56 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:16:56 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.354444835Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=677.954µs
23:16:56 policy-db-migrator |
23:16:56 policy-pap | sasl.login.read.timeout.ms = null
23:16:56 policy-apex-pdp | ssl.engine.factory.class = null
23:16:56 kafka | zookeeper.ssl.keystore.location = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.357345556Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
23:16:56 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
23:16:56 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:56 policy-apex-pdp | ssl.key.password = null
23:16:56 kafka | zookeeper.ssl.keystore.password = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.35793294Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=586.953µs
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:56 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:56 grafana |
logger=migrator t=2024-02-05T23:14:19.362036335Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:56 kafka | zookeeper.ssl.keystore.type = null 23:16:56 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:56 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.363449467Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.412601ms 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | zookeeper.ssl.ocsp.enable = false 23:16:56 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:56 policy-apex-pdp | ssl.keystore.key = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.366648046Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:56 policy-db-migrator | 23:16:56 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:56 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:56 policy-apex-pdp | ssl.keystore.location = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.367403969Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=755.833µs 23:16:56 policy-db-migrator | 23:16:56 kafka | zookeeper.ssl.truststore.location = null 23:16:56 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:56 policy-apex-pdp | ssl.keystore.password = null 23:16:56 policy-apex-pdp | ssl.keystore.type = JKS 23:16:56 kafka | zookeeper.ssl.truststore.password = null 23:16:56 policy-pap | sasl.mechanism = GSSAPI 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.376102371Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 
23:16:56 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:56 policy-apex-pdp | ssl.provider = null
23:16:56 kafka | zookeeper.ssl.truststore.type = null
23:16:56 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.386873185Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.769354ms
23:16:56 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:56 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:56 kafka | (kafka.server.KafkaConfig)
23:16:56 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.391641221Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
23:16:56 policy-apex-pdp | ssl.truststore.certificates = null
23:16:56 policy-apex-pdp | ssl.truststore.location = null
23:16:56 kafka | [2024-02-05 23:14:21,861] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:56 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.392393513Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=756.493µs
23:16:56 policy-apex-pdp | ssl.truststore.password = null
23:16:56 policy-apex-pdp | ssl.truststore.type = JKS
23:16:56 kafka | [2024-02-05 23:14:21,862] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.39663971Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
23:16:56 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:56 policy-apex-pdp |
23:16:56 kafka | [2024-02-05 23:14:21,865] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.397350322Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=710.272µs
23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.746+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.746+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:16:56 kafka | [2024-02-05 23:14:21,866] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.400684253Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.746+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174898746
23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.746+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Subscribed to topic(s): policy-pdp-pap
23:16:56 kafka | [2024-02-05 23:14:21,892] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.401031612Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=347.319µs
23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.751+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=331ed2f3-3c3d-4edb-a439-5458d1b7d3bd, alive=false, publisher=null]]: starting
23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.764+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:56 kafka | [2024-02-05 23:14:21,895] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
23:16:56 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.404677752Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
23:16:56 policy-apex-pdp | acks = -1
23:16:56 policy-apex-pdp | auto.include.jmx.reporter = true
23:16:56 kafka | [2024-02-05 23:14:21,904] INFO Loaded 0 logs in 12ms (kafka.log.LogManager)
23:16:56 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.405915144Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.239952ms
23:16:56 policy-apex-pdp | batch.size = 16384
23:16:56 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:16:56 kafka | [2024-02-05 23:14:21,906] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
23:16:56 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.412062515Z level=info msg="Executing migration" id="create alert_notification table v1"
23:16:56 policy-apex-pdp | buffer.memory = 33554432
23:16:56 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:16:56 kafka | [2024-02-05 23:14:21,907] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
23:16:56 policy-pap | security.protocol = PLAINTEXT
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.412890344Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=827.808µs
23:16:56 policy-apex-pdp | client.id = producer-1
23:16:56 policy-apex-pdp | compression.type = none
23:16:56 kafka | [2024-02-05 23:14:21,916] INFO Starting the log cleaner (kafka.log.LogCleaner)
23:16:56 policy-pap | security.providers = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.41713245Z level=info msg="Executing migration" id="Add column is_default"
23:16:56 policy-apex-pdp | connections.max.idle.ms = 540000
23:16:56 policy-apex-pdp | delivery.timeout.ms = 120000
23:16:56 kafka | [2024-02-05 23:14:21,958] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
23:16:56 policy-pap | send.buffer.bytes = 131072
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.420834894Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.702255ms
23:16:56 policy-apex-pdp | enable.idempotence = true
23:16:56 policy-apex-pdp | interceptor.classes = []
23:16:56 kafka | [2024-02-05 23:14:21,972] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
23:16:56 policy-pap | session.timeout.ms = 45000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.42520842Z level=info msg="Executing migration" id="Add column frequency"
23:16:56 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:56 policy-apex-pdp | linger.ms = 0
23:16:56 kafka | [2024-02-05 23:14:21,983] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
23:16:56 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.42884886Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.63956ms
23:16:56 policy-apex-pdp | max.block.ms = 60000
23:16:56 policy-apex-pdp | max.in.flight.requests.per.connection = 5
23:16:56 kafka | [2024-02-05 23:14:22,007] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
23:16:56 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:56 policy-apex-pdp | max.request.size = 1048576
23:16:56 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
23:16:56 kafka | [2024-02-05 23:14:22,417] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:16:56 policy-pap | ssl.cipher.suites = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.438567615Z level=info msg="Executing migration" id="Add column send_reminder"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:56 kafka | [2024-02-05 23:14:22,444] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
23:16:56 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.444028059Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=5.462955ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.447596652Z level=info msg="Executing migration" id="Add column disable_resolve_message"
23:16:56 kafka | [2024-02-05 23:14:22,444] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:16:56 policy-apex-pdp | metadata.max.age.ms = 300000
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.engine.factory.class = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.451111984Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.521793ms
23:16:56 kafka | [2024-02-05 23:14:22,450] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
23:16:56 policy-apex-pdp | metadata.max.idle.ms = 300000
23:16:56 policy-apex-pdp | metric.reporters = []
23:16:56 policy-pap | ssl.key.password = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.455635724Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
23:16:56 kafka | [2024-02-05 23:14:22,454] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
23:16:56 policy-apex-pdp | metrics.num.samples = 2
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.456646715Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.0102ms
23:16:56 kafka | [2024-02-05 23:14:22,472] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:56 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.keystore.certificate.chain = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.461028453Z level=info msg="Executing migration" id="Update alert table charset"
23:16:56 kafka | [2024-02-05 23:14:22,474] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.keystore.key = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.461054339Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=27.247µs
23:16:56 kafka | [2024-02-05 23:14:22,478] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.keystore.location = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.466641772Z level=info msg="Executing migration" id="Update alert_notification table charset"
23:16:56 kafka | [2024-02-05 23:14:22,481] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:56 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.keystore.password = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.466711268Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=70.576µs
23:16:56 kafka | [2024-02-05 23:14:22,495] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.keystore.type = JKS
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.472091574Z level=info msg="Executing migration" id="create notification_journal table v1"
23:16:56 kafka | [2024-02-05 23:14:22,517] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.protocol = TLSv1.3
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.472926814Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=836.31µs
23:16:56 kafka | [2024-02-05 23:14:22,551] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1707174862533,1707174862533,1,0,0,72057609975758849,258,0,27
23:16:56 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.provider = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.476292082Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
23:16:56 kafka | (kafka.zk.KafkaZkClient)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.secure.random.implementation = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.477494775Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.201784ms
23:16:56 kafka | [2024-02-05 23:14:22,552] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.481984279Z level=info msg="Executing migration" id="drop alert_notification_journal"
23:16:56 kafka | [2024-02-05 23:14:22,602] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
23:16:56 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.truststore.certificates = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.483589484Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.605765ms
23:16:56 kafka | [2024-02-05 23:14:22,612] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.truststore.location = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.487979735Z level=info msg="Executing migration" id="create alert_notification_state table v1"
23:16:56 kafka | [2024-02-05 23:14:22,620] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.truststore.password = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.488845622Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=861.846µs
23:16:56 kafka | [2024-02-05 23:14:22,621] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:56 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.truststore.type = JKS
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.494097969Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
23:16:56 kafka | [2024-02-05 23:14:22,625] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.494808241Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=710.452µs
23:16:56 kafka | [2024-02-05 23:14:22,643] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator |
23:16:56 policy-pap |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.49809857Z level=info msg="Executing migration" id="Add for to alert table"
23:16:56 kafka | [2024-02-05 23:14:22,646] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
23:16:56 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:14:55.730+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.502520578Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.421738ms
23:16:56 kafka | [2024-02-05 23:14:22,648] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:14:55.731+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.510351463Z level=info msg="Executing migration" id="Add column uid in alert_notification"
23:16:56 kafka | [2024-02-05 23:14:22,650] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator |
23:16:56 policy-pap | [2024-02-05T23:14:55.731+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174895728
23:16:56 kafka | [2024-02-05 23:14:22,652] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.514981167Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.629505ms
23:16:56 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:14:55.733+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-1, groupId=82113737-2238-440a-b31e-67419d0ce49a] Subscribed to topic(s): policy-pdp-pap
23:16:56 kafka | [2024-02-05 23:14:22,666] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.520862157Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:14:55.734+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:56 kafka | [2024-02-05 23:14:22,670] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.521167378Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=299.519µs
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator |
23:16:56 policy-pap | allow.auto.create.topics = true
23:16:56 kafka | [2024-02-05 23:14:22,671] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.526203044Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
23:16:56 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | auto.commit.interval.ms = 5000
23:16:56 kafka | [2024-02-05 23:14:22,681] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.528117111Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.913976ms
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | auto.include.jmx.reporter = true
23:16:56 kafka | [2024-02-05 23:14:22,682] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.533226215Z level=info msg="Executing migration" id="Remove unique index org_id_name"
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator |
23:16:56 policy-pap | auto.offset.reset = latest
23:16:56 kafka | [2024-02-05 23:14:22,690] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.534085961Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=859.945µs
23:16:56 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | bootstrap.servers = [kafka:9092]
23:16:56 kafka | [2024-02-05 23:14:22,694] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.539736579Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | check.crcs = true
23:16:56 kafka | [2024-02-05 23:14:22,700] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
23:16:56 policy-apex-pdp | metrics.recording.level = INFO
23:16:56 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:56 kafka | [2024-02-05 23:14:22,706] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.54364621Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.909251ms
23:16:56 policy-db-migrator |
23:16:56 policy-apex-pdp | metrics.sample.window.ms = 30000
23:16:56 policy-pap | client.id = consumer-policy-pap-2
23:16:56 kafka | [2024-02-05 23:14:22,713] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.546654906Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
23:16:56 policy-pap | client.rack =
23:16:56 kafka | [2024-02-05 23:14:22,718] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.546726812Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=72.587µs
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:56 policy-pap | connections.max.idle.ms = 540000
23:16:56 kafka | [2024-02-05 23:14:22,723] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.55155077Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator |
23:16:56 policy-pap | default.api.timeout.ms = 60000
23:16:56 kafka | [2024-02-05 23:14:22,731] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.552229386Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=678.566µs
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
23:16:56 policy-pap | enable.auto.commit = true
23:16:56 kafka | [2024-02-05 23:14:22,733] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.556033562Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:56 policy-pap | exclude.internal.topics = true
23:16:56 kafka | [2024-02-05 23:14:22,733] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.556739153Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=705.261µs
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator |
23:16:56 policy-pap | fetch.max.bytes = 52428800
23:16:56 kafka | [2024-02-05 23:14:22,733] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.559460843Z level=info msg="Executing migration" id="Drop old annotation table v4"
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
23:16:56 policy-pap | fetch.max.wait.ms = 500
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.559543912Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=83.039µs
23:16:56 kafka | [2024-02-05 23:14:22,734] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:56 policy-pap | fetch.min.bytes = 1
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.562370816Z level=info msg="Executing migration" id="create annotation table v5"
23:16:56 kafka | [2024-02-05 23:14:22,735] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator |
23:16:56 policy-pap | group.id = policy-pap
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.563159567Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=785.53µs
23:16:56 kafka | [2024-02-05 23:14:22,737] INFO [Controller id=1] List of topics to be deleted:  (kafka.controller.KafkaController)
23:16:56 policy-db-migrator |
23:16:56 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
23:16:56 policy-pap | group.instance.id = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.566952851Z level=info msg="Executing migration" id="add index annotation 0 v3"
23:16:56 kafka | [2024-02-05 23:14:22,737] INFO [Controller id=1] List of topics ineligible for deletion:  (kafka.controller.KafkaController)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator |
CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:56 policy-pap | heartbeat.interval.ms = 3000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.567772757Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=819.637µs 23:16:56 kafka | [2024-02-05 23:14:22,738] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-db-migrator | 23:16:56 policy-pap | interceptor.classes = [] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.572149695Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:16:56 kafka | [2024-02-05 23:14:22,738] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:16:56 policy-db-migrator | 23:16:56 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:56 policy-pap | internal.leave.group.on.close = true 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.573761912Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.616247ms 23:16:56 kafka | [2024-02-05 23:14:22,740] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:16:56 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:56 policy-apex-pdp | partitioner.class = null 23:16:56 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.632265704Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:16:56 kafka | [2024-02-05 23:14:22,743] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:16:56 policy-apex-pdp | 
partitioner.ignore.keys = false 23:16:56 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:56 policy-pap | isolation.level = read_uncommitted 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.633920421Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.657658ms 23:16:56 kafka | [2024-02-05 23:14:22,745] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:16:56 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:56 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:56 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.639494681Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:16:56 kafka | [2024-02-05 23:14:22,749] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 23:16:56 policy-apex-pdp | request.timeout.ms = 30000 23:16:56 policy-apex-pdp | retries = 2147483647 23:16:56 policy-pap | max.partition.fetch.bytes = 1048576 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.640694684Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.200993ms 23:16:56 kafka | [2024-02-05 23:14:22,749] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:16:56 policy-apex-pdp | retry.backoff.ms = 100 23:16:56 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:56 policy-pap | max.poll.interval.ms = 300000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.643540743Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:16:56 kafka | [2024-02-05 23:14:22,750] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:56 policy-apex-pdp | sasl.jaas.config = null 23:16:56 policy-apex-pdp | 
sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:56 policy-pap | max.poll.records = 500 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.644691876Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.111404ms 23:16:56 kafka | [2024-02-05 23:14:22,752] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 23:16:56 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:56 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:56 policy-pap | metadata.max.age.ms = 300000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.647464578Z level=info msg="Executing migration" id="Update annotation table charset" 23:16:56 kafka | [2024-02-05 23:14:22,753] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:56 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:56 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:56 policy-pap | metric.reporters = [] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.647493424Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=34.198µs 23:16:56 kafka | [2024-02-05 23:14:22,754] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:16:56 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:56 policy-apex-pdp | sasl.login.class = null 23:16:56 policy-pap | metrics.num.samples = 2 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.653843991Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:16:56 kafka | [2024-02-05 23:14:22,754] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:16:56 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:56 policy-apex-pdp | 
sasl.login.read.timeout.ms = null 23:16:56 policy-pap | metrics.recording.level = INFO 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.660991369Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=7.139237ms 23:16:56 kafka | [2024-02-05 23:14:22,755] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:16:56 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:56 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:56 policy-pap | metrics.sample.window.ms = 30000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.664297904Z level=info msg="Executing migration" id="Drop category_id index" 23:16:56 kafka | [2024-02-05 23:14:22,757] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 23:16:56 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:56 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:56 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.665119681Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=821.717µs 23:16:56 kafka | [2024-02-05 23:14:22,757] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:16:56 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:56 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:56 policy-pap | receive.buffer.bytes = 65536 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.668104341Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:16:56 kafka | [2024-02-05 23:14:22,757] INFO [RequestSendThread controllerId=1] 
Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:16:56 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:16:56 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:56 policy-pap | reconnect.backoff.max.ms = 1000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.6731474Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=5.041209ms 23:16:56 kafka | [2024-02-05 23:14:22,766] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:56 policy-pap | reconnect.backoff.ms = 50 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.682658107Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:56 kafka | [2024-02-05 23:14:22,768] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:56 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:56 policy-pap | request.timeout.ms = 30000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.68350692Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=852.694µs 23:16:56 kafka | [2024-02-05 23:14:22,768] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:56 policy-pap | retry.backoff.ms = 100 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.686958637Z level=info msg="Executing migration" id="Add unique index 
annotation_tag.annotation_id_tag_id" 23:16:56 kafka | [2024-02-05 23:14:22,768] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 23:16:56 policy-db-migrator | 23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:56 policy-pap | sasl.client.callback.handler.class = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.687753128Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=791.54µs 23:16:56 kafka | [2024-02-05 23:14:22,769] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:16:56 policy-db-migrator | 23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:56 policy-pap | sasl.jaas.config = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.693295631Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:56 kafka | [2024-02-05 23:14:22,777] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:56 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.694367015Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.072064ms 23:16:56 kafka | [2024-02-05 23:14:22,777] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser) 23:16:56 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:16:56 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:56 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.697849629Z level=info msg="Executing 
migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:56 kafka | [2024-02-05 23:14:22,777] INFO Kafka startTimeMs: 1707174862770 (org.apache.kafka.common.utils.AppInfoParser) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:56 policy-pap | sasl.kerberos.service.name = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.714397959Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=16.543559ms 23:16:56 kafka | [2024-02-05 23:14:22,779] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:16:56 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:56 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.718069797Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:56 kafka | [2024-02-05 23:14:22,786] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:56 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.718729597Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=659.54µs 23:16:56 kafka | [2024-02-05 23:14:22,844] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:56 policy-pap | sasl.login.callback.handler.class = null 23:16:56 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:56 kafka | [2024-02-05 
23:14:22,860] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.723913668Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:56 policy-db-migrator | 23:16:56 policy-pap | sasl.login.class = null 23:16:56 policy-apex-pdp | security.providers = null 23:16:56 kafka | [2024-02-05 23:14:22,874] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.725050527Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.140619ms 23:16:56 policy-pap | sasl.login.connect.timeout.ms = null 23:16:56 kafka | [2024-02-05 23:14:27,788] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.730871343Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:56 policy-apex-pdp | send.buffer.bytes = 131072 23:16:56 policy-pap | sasl.login.read.timeout.ms = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.731548108Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=681.266µs 23:16:56 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:56 kafka | [2024-02-05 23:14:27,789] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:16:56 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:56 kafka | [2024-02-05 
23:14:58,039] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.73630018Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:16:56 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:56 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.737233554Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=933.423µs 23:16:56 policy-apex-pdp | ssl.cipher.suites = null 23:16:56 kafka | [2024-02-05 23:14:58,043] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:56 
policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:56 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:56 kafka | [2024-02-05 23:14:58,046] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.740449757Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:56 policy-db-migrator | 23:16:56 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:56 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:56 kafka | [2024-02-05 23:14:58,052] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.740620355Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=170.689µs 23:16:56 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:16:56 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:56 policy-apex-pdp | ssl.engine.factory.class = null 23:16:56 kafka | [2024-02-05 23:14:58,082] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(u46pnWTBR6-v7DJLPWifgQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.745909621Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:56 policy-apex-pdp | ssl.key.password = null 23:16:56 kafka | [2024-02-05 23:14:58,083] INFO [Controller id=1] New partition 
creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.749992551Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.077079ms 23:16:56 policy-pap | sasl.mechanism = GSSAPI 23:16:56 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:56 kafka | [2024-02-05 23:14:58,085] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.75700985Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:56 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:56 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.76100212Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.992129ms 23:16:56 kafka | [2024-02-05 23:14:58,085] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:56 policy-apex-pdp | ssl.keystore.key = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.763818151Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:56 kafka | [2024-02-05 23:14:58,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | 
sasl.oauthbearer.expected.issuer = null 23:16:56 policy-apex-pdp | ssl.keystore.location = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.764449075Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=630.243µs 23:16:56 kafka | [2024-02-05 23:14:58,088] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:56 policy-apex-pdp | ssl.keystore.password = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.768905751Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:56 kafka | [2024-02-05 23:14:58,108] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:56 policy-apex-pdp | ssl.keystore.type = JKS 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.769512639Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=606.598µs 23:16:56 kafka | [2024-02-05 23:14:58,110] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:56 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.776067163Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:16:56 kafka | [2024-02-05 23:14:58,111] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:56 policy-apex-pdp | ssl.provider = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.77640594Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=339.688µs 23:16:56 kafka | [2024-02-05 23:14:58,113] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:56 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.781642303Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:56 kafka | [2024-02-05 23:14:58,113] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:56 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.78886935Z level=info msg="Migration successfully executed" id="Add epoch_end 
column" duration=7.214564ms 23:16:56 kafka | [2024-02-05 23:14:58,114] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:56 policy-apex-pdp | ssl.truststore.certificates = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.797089623Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:56 kafka | [2024-02-05 23:14:58,118] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:16:56 policy-pap | security.protocol = PLAINTEXT 23:16:56 policy-apex-pdp | ssl.truststore.location = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.797854517Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=766.555µs 23:16:56 kafka | [2024-02-05 23:14:58,125] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | security.providers = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.800842488Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:56 kafka | [2024-02-05 23:14:58,128] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(csBd2HU8Tmiot-5BjYrBHg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:56 policy-apex-pdp | ssl.truststore.password = null 23:16:56 policy-pap | send.buffer.bytes = 131072 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.801096776Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=261.409µs 23:16:56 policy-apex-pdp | ssl.truststore.type = JKS 23:16:56 kafka | [2024-02-05 23:14:58,128] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-3
2,__consumer_offsets-40 (kafka.controller.KafkaController) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | session.timeout.ms = 45000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.811022288Z level=info msg="Executing migration" id="Move region to single row" 23:16:56 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:56 kafka | [2024-02-05 23:14:58,128] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:56 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.811599659Z level=info msg="Migration successfully executed" id="Move region to single row" duration=577.822µs 23:16:56 policy-apex-pdp | transactional.id = null 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.816923873Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:56 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | ssl.cipher.suites = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.818280873Z level=info 
msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.35699ms 23:16:56 policy-apex-pdp | 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.824397766Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.773+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:16:56 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.825199579Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=801.682µs 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-pap | ssl.engine.factory.class = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.828446608Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|AppInfoParser|main] Kafka commitId: 
60e845626d8a465a 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-pap | ssl.key.password = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.829366849Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=919.43µs 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174898815 23:16:56 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.837945034Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=331ed2f3-3c3d-4edb-a439-5458d1b7d3bd, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:56 policy-pap | ssl.keystore.certificate.chain = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.839041953Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.09704ms 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:56 policy-pap | ssl.keystore.key = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.842005608Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.815+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:56 policy-pap | ssl.keystore.location = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.842975059Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=968.76µs 23:16:56 policy-db-migrator | 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.818+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:56 policy-pap | ssl.keystore.password = null 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.847620968Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:56 policy-db-migrator | 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.818+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:56 policy-pap | 
ssl.keystore.type = JKS 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.848503719Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=875.949µs 23:16:56 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:56 policy-pap | ssl.protocol = TLSv1.3 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.853575365Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.853714656Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=138.942µs 23:16:56 policy-pap | ssl.provider = null 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.857981799Z level=info msg="Executing migration" id="create test_data table" 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|TopicBase|main] 
SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=447a3058-d755-46ac-8e2e-59b142489c6a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-pap | ssl.secure.random.implementation = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.859048132Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.064442ms 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=447a3058-d755-46ac-8e2e-59b142489c6a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, 
toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.864387259Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:56 policy-pap | ssl.truststore.certificates = null 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.820+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.865645515Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.259767ms 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.846+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-pap | ssl.truststore.location = null 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.871644773Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:56 policy-apex-pdp | [] 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-pap | ssl.truststore.password = null 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | 
logger=migrator t=2024-02-05T23:14:19.872595719Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=950.597µs 23:16:56 policy-apex-pdp | [2024-02-05T23:14:58.852+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-pap | ssl.truststore.type = JKS 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.877500737Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e3e0d441-6486-4a92-a10b-385c33c9d2d1","timestampMs":1707174898823,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"} 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.879173448Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.67202ms 23:16:56 policy-pap | 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.024+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.882336088Z level=info msg="Executing 
migration" id="Set dashboard version to 1 where 0" 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.024+00:00|INFO|ServiceManager|main] service manager starting 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-pap | [2024-02-05T23:14:55.741+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.025+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.882812538Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=475.759µs 23:16:56 policy-db-migrator | 23:16:56 policy-pap | [2024-02-05T23:14:55.741+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.025+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, 
(http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:56 kafka | [2024-02-05 23:14:58,129] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.888751041Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:56 policy-db-migrator | 23:16:56 policy-pap | [2024-02-05T23:14:55.741+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174895741 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.035+00:00|INFO|ServiceManager|main] service manager started 23:16:56 kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.889267098Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=515.828µs 23:16:56 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:16:56 policy-pap | [2024-02-05T23:14:55.741+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.035+00:00|INFO|ServiceManager|main] service manager started 23:16:56 kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.896947109Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.035+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 23:16:56 kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-pap | [2024-02-05T23:14:56.085+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.897051892Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=105.814µs 23:16:56 kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.035+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, 
/*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:56 policy-pap | [2024-02-05T23:14:56.277+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.902772206Z level=info msg="Executing migration" id="create team table" 23:16:56 kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.162+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ 23:16:56 policy-pap | [2024-02-05T23:14:56.523+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2c1a95a2, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1adf387e, org.springframework.security.web.context.SecurityContextHolderFilter@3909308c, org.springframework.security.web.header.HeaderWriterFilter@2e2cd42c, org.springframework.security.web.authentication.logout.LogoutFilter@4af44f2a, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5020e5ab, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3b6c740b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@78f4d15d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@72b53f27, org.springframework.security.web.access.ExceptionTranslationFilter@581d5b33, org.springframework.security.web.access.intercept.AuthorizationFilter@7db2b614] 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.90437172Z level=info msg="Migration 
successfully executed" id="create team table" duration=1.591102ms 23:16:56 kafka | [2024-02-05 23:14:58,130] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.163+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ 23:16:56 policy-pap | [2024-02-05T23:14:57.357+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.909004036Z level=info msg="Executing migration" id="add index team.org_id" 23:16:56 kafka | [2024-02-05 23:14:58,131] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.164+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:56 policy-pap | [2024-02-05T23:14:57.452+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.910045583Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.041307ms 23:16:56 kafka | [2024-02-05 23:14:58,131] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.165+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:56 policy-pap | 
[2024-02-05T23:14:57.468+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
23:16:56 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.915456276Z level=info msg="Executing migration" id="add unique index team_org_id_name"
23:16:56 kafka | [2024-02-05 23:14:58,131] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.169+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] (Re-)joining group
23:16:56 policy-pap | [2024-02-05T23:14:57.484+00:00|INFO|ServiceManager|main] Policy PAP starting
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.917165046Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.706119ms
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.180+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Request joining group due to: need to re-join with the given member-id: consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d
23:16:56 policy-pap | [2024-02-05T23:14:57.484+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.922302216Z level=info msg="Executing migration" id="Add column uid in team"
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.180+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:56 policy-pap | [2024-02-05T23:14:57.484+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.927093788Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.792361ms
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.180+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] (Re-)joining group
23:16:56 policy-pap | [2024-02-05T23:14:57.485+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.932850699Z level=info msg="Executing migration" id="Update uid column values in team"
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.629+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
23:16:56 policy-pap | [2024-02-05T23:14:57.485+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.93311297Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=288.286µs
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:14:59.629+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
23:16:56 policy-pap | [2024-02-05T23:14:57.485+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.939233275Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
23:16:56 policy-apex-pdp | [2024-02-05T23:15:02.185+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d', protocol='range'}
23:16:56 policy-pap | [2024-02-05T23:14:57.486+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:15:02.196+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Finished assignment for group at generation 1: {consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d=Assignment(partitions=[policy-pdp-pap-0])}
23:16:56 policy-pap | [2024-02-05T23:14:57.492+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=82113737-2238-440a-b31e-67419d0ce49a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@509e4902
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.940609158Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.374313ms
23:16:56 policy-apex-pdp | [2024-02-05T23:15:02.207+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d', protocol='range'}
23:16:56 policy-pap | [2024-02-05T23:14:57.502+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=82113737-2238-440a-b31e-67419d0ce49a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.944908248Z level=info msg="Executing migration" id="create team member table"
23:16:56 policy-apex-pdp | [2024-02-05T23:15:02.208+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:56 policy-pap | [2024-02-05T23:14:57.502+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.946005357Z level=info msg="Migration successfully executed" id="create team member table" duration=1.096359ms
23:16:56 policy-apex-pdp | [2024-02-05T23:15:02.210+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Adding newly assigned partitions: policy-pdp-pap-0
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.951992902Z level=info msg="Executing migration" id="add index team_member.org_id"
23:16:56 policy-pap | allow.auto.create.topics = true
23:16:56 policy-apex-pdp | [2024-02-05T23:15:02.219+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Found no committed offset for partition policy-pdp-pap-0
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.952921024Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=931.043µs
23:16:56 policy-pap | auto.commit.interval.ms = 5000
23:16:56 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
23:16:56 policy-apex-pdp | [2024-02-05T23:15:02.232+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2, groupId=447a3058-d755-46ac-8e2e-59b142489c6a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.959854763Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
23:16:56 policy-pap | auto.include.jmx.reporter = true
23:16:56 policy-db-migrator | --------------
23:16:56 policy-apex-pdp | [2024-02-05T23:15:18.821+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.961499559Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.643945ms
23:16:56 policy-pap | auto.offset.reset = latest
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce579082-502e-4e58-9380-d7baab3a6748","timestampMs":1707174918820,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.966007915Z level=info msg="Executing migration" id="add index team_member.team_id"
23:16:56 policy-pap | bootstrap.servers = [kafka:9092]
23:16:56 policy-db-migrator | --------------
23:16:56 policy-apex-pdp | [2024-02-05T23:15:18.842+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.967352422Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.344267ms
23:16:56 policy-pap | check.crcs = true
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.970130765Z level=info msg="Executing migration" id="Add column email to team table"
23:16:56 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce579082-502e-4e58-9380-d7baab3a6748","timestampMs":1707174918820,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
23:16:56 kafka | [2024-02-05 23:14:58,132] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:56 policy-pap | client.id = consumer-82113737-2238-440a-b31e-67419d0ce49a-3
23:16:56 policy-apex-pdp | [2024-02-05T23:15:18.845+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:56 kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.975024771Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.893585ms
23:16:56 policy-db-migrator |
23:16:56 policy-pap | client.rack =
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.011+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.980803808Z level=info msg="Executing migration" id="Add column external to team_member table"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | connections.max.idle.ms = 540000
23:16:56 policy-apex-pdp | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"687caf5b-4d92-42de-acdb-f82aab7cc43c","timestampMs":1707174918952,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.985617164Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.812396ms
23:16:56 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
23:16:56 policy-pap | default.api.timeout.ms = 60000
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.023+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
23:16:56 kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.989881016Z level=info msg="Executing migration" id="Add column permission to team_member table"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | enable.auto.commit = true
23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"73abfafa-5e87-480f-971c-84c352b572be","timestampMs":1707174919022,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
23:16:56 kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.994709416Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.827991ms
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
23:16:56 policy-pap | exclude.internal.topics = true
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.023+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
23:16:56 kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.998350416Z level=info msg="Executing migration" id="create dashboard acl table"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | fetch.max.bytes = 52428800
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.026+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
23:16:56 kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:19.999357085Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.006409ms
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.006853361Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
23:16:56 policy-pap | fetch.max.wait.ms = 500
23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"687caf5b-4d92-42de-acdb-f82aab7cc43c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"97dae9b3-163a-482b-805c-f915fcf0db7a","timestampMs":1707174919025,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.00772592Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=872.538µs
23:16:56 policy-pap | fetch.min.bytes = 1
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.039+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.010857192Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
23:16:56 policy-pap | group.id = 82113737-2238-440a-b31e-67419d0ce49a
23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"73abfafa-5e87-480f-971c-84c352b572be","timestampMs":1707174919022,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
23:16:56 kafka | [2024-02-05 23:14:58,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.012518231Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.660428ms
23:16:56 policy-pap | group.instance.id = null
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.039+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:56 kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.015947751Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
23:16:56 policy-pap | heartbeat.interval.ms = 3000
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.042+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.017453804Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.505243ms
23:16:56 policy-pap | interceptor.classes = []
23:16:56 kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"687caf5b-4d92-42de-acdb-f82aab7cc43c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"97dae9b3-163a-482b-805c-f915fcf0db7a","timestampMs":1707174919025,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.023995122Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
23:16:56 policy-pap | internal.leave.group.on.close = true
23:16:56 kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.042+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.024847746Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=852.624µs
23:16:56 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:56 kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.079+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.027710668Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
23:16:56 policy-pap | isolation.level = read_uncommitted
23:16:56 kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"54fa79c2-9992-4631-b988-2a9cecf2df7f","timestampMs":1707174918953,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator | > upgrade 0450-pdpgroup.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.029075389Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.364121ms
23:16:56 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:56 kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.081+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.033198547Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
23:16:56 policy-pap | max.partition.fetch.bytes = 1048576
23:16:56 kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"54fa79c2-9992-4631-b988-2a9cecf2df7f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d0f9af2d-ed17-4886-8d33-16b52a441775","timestampMs":1707174919081,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.034605478Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.40594ms
23:16:56 policy-pap | max.poll.interval.ms = 300000
23:16:56 kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.088+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.042775888Z level=info msg="Executing migration" id="add index dashboard_permission"
23:16:56 policy-pap | max.poll.records = 500
23:16:56 kafka | [2024-02-05 23:14:58,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"54fa79c2-9992-4631-b988-2a9cecf2df7f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d0f9af2d-ed17-4886-8d33-16b52a441775","timestampMs":1707174919081,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.044283241Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.498271ms
23:16:56 policy-pap | metadata.max.age.ms = 300000
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.088+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.049021109Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
23:16:56 policy-pap | metric.reporters = []
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.115+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.049781212Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=762.433µs
23:16:56 policy-pap | metrics.num.samples = 2
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39c1570c-dd05-4983-bede-a1e58213f1cf","timestampMs":1707174919093,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.055920809Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.117+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
23:16:56 policy-pap | metrics.recording.level = INFO
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.056258007Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=337.407µs
23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39c1570c-dd05-4983-bede-a1e58213f1cf","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"596c9de2-f9f1-49de-b6fb-fa0fd5c472b5","timestampMs":1707174919116,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-pap | metrics.sample.window.ms = 30000
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.104703054Z level=info msg="Executing migration" id="create tag table"
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.125+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.105420877Z level=info msg="Migration successfully executed" id="create tag table" duration=717.323µs
23:16:56 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39c1570c-dd05-4983-bede-a1e58213f1cf","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"596c9de2-f9f1-49de-b6fb-fa0fd5c472b5","timestampMs":1707174919116,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-pap | receive.buffer.bytes = 65536
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.110398361Z level=info msg="Executing migration" id="add index tag.key_value"
23:16:56 policy-apex-pdp | [2024-02-05T23:15:19.125+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:56 policy-pap | reconnect.backoff.max.ms = 1000
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.11136145Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=962.909µs
23:16:56 policy-apex-pdp | [2024-02-05T23:15:56.161+00:00|INFO|RequestLog|qtp830863979-33] 172.17.0.2 - policyadmin [05/Feb/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.49.1"
23:16:56 policy-pap | reconnect.backoff.ms = 50
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.118241226Z level=info msg="Executing migration" id="create login attempt table"
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-pap | request.timeout.ms = 30000
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.119104282Z level=info msg="Migration successfully executed" id="create login attempt table" duration=862.666µs
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-pap | retry.backoff.ms = 100
23:16:56 policy-db-migrator | > upgrade 0470-pdp.sql
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.122180703Z level=info msg="Executing migration" id="add index login_attempt.username"
23:16:56 policy-pap | sasl.client.callback.handler.class = null
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.124270078Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=2.092097ms
23:16:56 policy-pap | sasl.jaas.config = null
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:56 kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.129638451Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
23:16:56 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:56 kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.130615453Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=975.082µs
23:16:56 policy-pap | sasl.kerberos.service.name = null
23:16:56 kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.135402643Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
23:16:56 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:56 kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.157458643Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=22.05354ms
23:16:56 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:56 kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.162306796Z level=info msg="Executing migration" id="create login_attempt v2"
23:16:56 policy-pap | sasl.login.callback.handler.class = null
23:16:56 kafka | [2024-02-05 23:14:58,143] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.16302513Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=720.115µs
23:16:56 policy-pap | sasl.login.class = null
23:16:56 kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.166129817Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
23:16:56 policy-pap | sasl.login.connect.timeout.ms = null
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.16706804Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=937.433µs
23:16:56 policy-pap | sasl.login.read.timeout.ms = null
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
23:16:56 kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.17809363Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
23:16:56 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.178440529Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=346.708µs
23:16:56 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.184085224Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
23:16:56 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:56 kafka | [2024-02-05 23:14:58,145] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.185110087Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.024373ms
23:16:56 kafka | [2024-02-05 23:14:58,146] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.190206467Z level=info msg="Executing migration" id="create user auth table"
23:16:56 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.191396208Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.189162ms
23:16:56 kafka | [2024-02-05 23:14:58,146] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.198110627Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
23:16:56 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:56 kafka | [2024-02-05 23:14:58,146] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-pap | sasl.mechanism = GSSAPI
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.199112834Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.001818ms
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,146] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
23:16:56 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.203305639Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
23:16:56 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
23:16:56 kafka | [2024-02-05 23:14:58,146] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:56 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.203523339Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=218.07µs
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,156] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:16:56 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.212556124Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
23:16:56 kafka | [2024-02-05 23:14:58,162] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:56 kafka | [2024-02-05 23:14:58,163] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.218563722Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.006817ms
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:56 kafka | [2024-02-05 23:14:58,259] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.222982478Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:56 kafka | [2024-02-05 23:14:58,274] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.231310514Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.324755ms
23:16:56 policy-db-migrator |
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:56 kafka | [2024-02-05 23:14:58,276] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.23739955Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:56 kafka | [2024-02-05 23:14:58,276] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.24262947Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.230361ms
23:16:56 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
23:16:56 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:56 kafka | [2024-02-05 23:14:58,278] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(u46pnWTBR6-v7DJLPWifgQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.245694428Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:56 kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.250729204Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.034505ms
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:56 policy-pap | security.protocol = PLAINTEXT
23:16:56 kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.255929048Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | security.providers = null
23:16:56 kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.256639819Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=710.982µs
23:16:56 policy-db-migrator |
23:16:56 policy-pap | send.buffer.bytes = 131072
23:16:56 kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-pap | session.timeout.ms = 45000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.261540525Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
23:16:56 kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.266727696Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.18664ms
23:16:56 kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.272129176Z level=info msg="Executing migration" id="create server_lock table"
23:16:56 kafka | [2024-02-05 23:14:58,281] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.272936059Z level=info msg="Migration successfully executed" id="create server_lock table" duration=806.643µs
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.278899446Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
23:16:56 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
23:16:56 policy-pap | ssl.cipher.suites = null
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.28080177Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.902464ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.288747567Z level=info msg="Executing migration" id="create user auth token table"
23:16:56 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.289693113Z level=info msg="Migration successfully executed" id="create user auth token table" duration=948.046µs
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-pap | ssl.engine.factory.class = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.293315857Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-pap | ssl.key.password = null
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.294526564Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.210466ms
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.30510269Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
23:16:56 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.306544209Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.441198ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.keystore.certificate.chain = null
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.310762939Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.keystore.key = null
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.312580103Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.817114ms
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.keystore.location = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.318402448Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
23:16:56 policy-pap | ssl.keystore.password = null
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.323824942Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.421604ms
23:16:56 policy-pap | ssl.keystore.type = JKS
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.371115447Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
23:16:56 policy-pap | ssl.protocol = TLSv1.3
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.374016847Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=2.895849ms
23:16:56 policy-pap | ssl.provider = null
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.379861948Z level=info msg="Executing migration" id="create cache_data table"
23:16:56 policy-pap | ssl.secure.random.implementation = null
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.380607217Z level=info msg="Migration successfully executed" id="create cache_data table" duration=745.15µs
23:16:56 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:56 policy-pap | ssl.truststore.certificates = null
23:16:56 policy-pap | ssl.truststore.location = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.387984657Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.truststore.password = null
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.388904105Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=911.257µs
23:16:56 policy-pap | ssl.truststore.type = JKS
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.394703585Z level=info msg="Executing migration" id="create short_url table v1"
23:16:56 kafka | [2024-02-05 23:14:58,282] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:56 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.396093202Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.387846ms
23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap |
23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.401567438Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
23:16:56 policy-pap | [2024-02-05T23:14:57.508+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.403601251Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.033092ms
23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.408946017Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:14:57.508+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.409023295Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=77.478µs
23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:14:57.508+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174897508
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.415685322Z level=info msg="Executing migration" id="delete alert_definition table"
23:16:56 policy-pap | [2024-02-05T23:14:57.509+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Subscribed to topic(s): policy-pdp-pap
23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.41584986Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=153.035µs
23:16:56 policy-pap | [2024-02-05T23:14:57.509+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.421823219Z level=info msg="Executing migration" id="recreate alert_definition table"
23:16:56 policy-pap | [2024-02-05T23:14:57.509+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cd7670e3-0a24-44c8-9ed2-b9e3c70e4f45, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null,
apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f190cfe 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.423458691Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.639143ms 23:16:56 policy-pap | [2024-02-05T23:14:57.509+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cd7670e3-0a24-44c8-9ed2-b9e3c70e4f45, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:20.430916428Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:56 policy-pap | [2024-02-05T23:14:57.509+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.433541726Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=2.612574ms 23:16:56 policy-pap | allow.auto.create.topics = true 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.43896021Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:56 policy-pap | auto.commit.interval.ms = 5000 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | -------------- 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.440047877Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.086877ms 23:16:56 policy-pap | auto.include.jmx.reporter = true 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.445634378Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:56 policy-pap | auto.offset.reset = latest 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.445710196Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=77.467µs 23:16:56 policy-pap | bootstrap.servers = [kafka:9092] 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.450965862Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:56 kafka | 
[2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-pap | check.crcs = true 23:16:56 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.452010451Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.044138ms 23:16:56 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.457053308Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | client.id = consumer-policy-pap-4 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.458344862Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.291023ms 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, 
concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:56 policy-pap | client.rack = 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.467001262Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | connections.max.idle.ms = 540000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.468053671Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.060121ms 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | default.api.timeout.ms = 60000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.472851274Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:56 kafka | [2024-02-05 23:14:58,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | enable.auto.commit = true 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.473573278Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=721.654µs 23:16:56 kafka | [2024-02-05 23:14:58,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:56 policy-pap | exclude.internal.topics = true 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.481559306Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:56 kafka | [2024-02-05 23:14:58,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | fetch.max.bytes = 52428800 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.485738938Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.180613ms 23:16:56 kafka | [2024-02-05 23:14:58,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | 
CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:56 policy-pap | fetch.max.wait.ms = 500 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.490531148Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:56 kafka | [2024-02-05 23:14:58,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | fetch.min.bytes = 1 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.491400166Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=868.309µs 23:16:56 kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | group.id = policy-pap 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.495925266Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:56 kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | group.instance.id = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.496096545Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=171.799µs 23:16:56 kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:56 policy-pap | heartbeat.interval.ms = 3000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.500170082Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:56 kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | interceptor.classes = [] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.501471098Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.300935ms 23:16:56 kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:56 policy-pap | internal.leave.group.on.close = true 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.561199204Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:56 kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.563134864Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.93911ms 23:16:56 policy-db-migrator | 23:16:56 policy-pap | isolation.level = read_uncommitted 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.567642901Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:16:56 kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.568589276Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=945.985µs 23:16:56 kafka | [2024-02-05 23:14:58,284] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:56 policy-pap | max.partition.fetch.bytes = 1048576 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.5729094Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | max.poll.interval.ms = 300000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.572977415Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=68.916µs 23:16:56 kafka | [2024-02-05 
23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:56 policy-pap | max.poll.records = 500 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.577684977Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | metadata.max.age.ms = 300000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.579197711Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.522236ms 23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | metric.reporters = [] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.586181281Z level=info msg="Executing migration" id="create alert_instance table" 23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | metrics.num.samples = 2 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.587140949Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=959.308µs 23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:56 policy-pap | metrics.recording.level = INFO 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.590914848Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | metrics.sample.window.ms = 30000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.592356636Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.440878ms 23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:56 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.599682973Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | receive.buffer.bytes = 65536
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.60067591Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=992.797µs
23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 policy-pap | reconnect.backoff.max.ms = 1000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.604651285Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 policy-pap | reconnect.backoff.ms = 50
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.609269106Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.618142ms
23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
23:16:56 policy-pap | request.timeout.ms = 30000
23:16:56 kafka | [2024-02-05 23:14:58,285] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.613366148Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | retry.backoff.ms = 100
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.614110348Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=743.43µs
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
23:16:56 policy-pap | sasl.client.callback.handler.class = null
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.617877946Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.618603641Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=725.465µs
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
23:16:56 policy-pap | sasl.jaas.config = null
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
23:16:56 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.620989894Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
23:16:56 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.656188747Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=35.194821ms
23:16:56 policy-pap | sasl.kerberos.service.name = null
23:16:56 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.660184295Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
23:16:56 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.693363488Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=33.172581ms
23:16:56 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.696856763Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
23:16:56 policy-pap | sasl.login.callback.handler.class = null
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.697643121Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=783.348µs
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
23:16:56 policy-pap | sasl.login.class = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.701651514Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
23:16:56 policy-pap | sasl.login.connect.timeout.ms = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.703316183Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.668089ms
23:16:56 policy-db-migrator | > upgrade 0630-toscanodetype.sql
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
23:16:56 policy-pap | sasl.login.read.timeout.ms = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.706425941Z level=info msg="Executing migration" id="add current_reason column related to current_state"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
23:16:56 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.713676071Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.25106ms
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
23:16:56 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.717635963Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:16:56 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.723182246Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.545952ms
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
23:16:56 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.727538037Z level=info msg="Executing migration" id="create alert_rule table"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
23:16:56 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.728386229Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=847.963µs
23:16:56 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
23:16:56 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.73300301Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
23:16:56 policy-pap | sasl.mechanism = GSSAPI
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.73409969Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.09908ms
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
23:16:56 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.738214647Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
23:16:56 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.739944811Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.729474ms
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
23:16:56 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.743916585Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.745795883Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.875478ms
23:16:56 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:56 kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.748880015Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:56 kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.748975427Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=95.801µs
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:56 kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.752119502Z level=info msg="Executing migration" id="add column for to alert_rule"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:56 kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.759923729Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=7.802256ms
23:16:56 policy-db-migrator |
23:16:56 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:56 kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.764464131Z level=info msg="Executing migration" id="add column annotations to alert_rule"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:56 kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.768711119Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.246737ms
23:16:56 policy-db-migrator | > upgrade 0660-toscaparameter.sql
23:16:56 policy-pap | security.protocol = PLAINTEXT
23:16:56 kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.784913567Z level=info msg="Executing migration" id="add column labels to alert_rule"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | security.providers = null
23:16:56 kafka | [2024-02-05 23:14:58,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.795285668Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=10.364068ms
23:16:56 kafka | [2024-02-05 23:14:58,287] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
23:16:56 policy-pap | send.buffer.bytes = 131072
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.799237557Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:56 kafka | [2024-02-05 23:14:58,287] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
23:16:56 policy-pap | session.timeout.ms = 45000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.799915181Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=677.274µs
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.802832235Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.803549958Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=717.473µs
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.cipher.suites = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.807955302Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
23:16:56 policy-db-migrator | > upgrade 0670-toscapolicies.sql
23:16:56 kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.816228935Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=8.274634ms
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.819682941Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
23:16:56 kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.engine.factory.class = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.826873878Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.185066ms
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.key.password = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.830894743Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.83198391Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.088988ms
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.keystore.certificate.chain = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.836332561Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
23:16:56 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
23:16:56 kafka | [2024-02-05 23:14:58,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.keystore.key = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.842661951Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.328861ms
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.keystore.location = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.848286972Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.keystore.password = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.854894677Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.607255ms
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.keystore.type = JKS
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.860854553Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.protocol = TLSv1.3
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.860920718Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=67.005µs
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.provider = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.863942146Z level=info msg="Executing migration" id="create alert_rule_version table"
23:16:56 policy-db-migrator | > upgrade 0690-toscapolicy.sql
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.secure.random.implementation = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.864882779Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=937.453µs
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.869168395Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.truststore.certificates = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.870166842Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=998.077µs
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.truststore.location = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.874032172Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.truststore.password = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.875113188Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.080687ms
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | ssl.truststore.type = JKS
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.879123621Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.879184565Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=61.474µs
23:16:56 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.884554407Z level=info msg="Executing migration" id="add column for to alert_rule_version"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.890682762Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.128185ms
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
23:16:56 policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.894679913Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174897514
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.900737201Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.062629ms
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.90376554Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|ServiceManager|main] Policy PAP starting topics
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.909809936Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.042045ms
23:16:56 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:14:57.514+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cd7670e3-0a24-44c8-9ed2-b9e3c70e4f45, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.916126974Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | [2024-02-05T23:14:57.515+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=82113737-2238-440a-b31e-67419d0ce49a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.922152386Z level=info msg="Migration successfully executed" id="add rule_group_idx column 
to alert_rule_version" duration=6.025163ms 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | [2024-02-05T23:14:57.515+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8340479e-6066-4c0d-8cea-5ee1d125717a, alive=false, publisher=null]]: starting 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.9260363Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | [2024-02-05T23:14:57.540+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.932050428Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.013959ms 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | acks = -1 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.937178986Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica 
(state.change.logger) 23:16:56 policy-pap | auto.include.jmx.reporter = true 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.937242Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=63.755µs 23:16:56 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | batch.size = 16384 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.94264359Z level=info msg="Executing migration" id=create_alert_configuration_table 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | bootstrap.servers = [kafka:9092] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.943323644Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=679.715µs 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | buffer.memory = 33554432 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.948415114Z level=info msg="Executing 
migration" id="Add column default in alert_configuration" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.954530155Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.112361ms 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | client.id = producer-1 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.958326109Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | compression.type = none 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.958388313Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=62.874µs 23:16:56 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | connections.max.idle.ms = 540000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.964716084Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:16:56 policy-db-migrator 
| -------------- 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | delivery.timeout.ms = 120000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.970588091Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.873806ms 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | enable.idempotence = true 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.976231575Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | interceptor.classes = [] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:20.977009012Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=777.186µs 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state 
of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.015717227Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | linger.ms = 0 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.022014207Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.29672ms 23:16:56 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | max.block.ms = 60000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.025692922Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | max.in.flight.requests.per.connection = 5 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.026397493Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=704.311µs 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY 
KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:56 kafka | [2024-02-05 23:14:58,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | max.request.size = 1048576 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.032696323Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:16:56 policy-pap | metadata.max.age.ms = 300000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.033660022Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=963.428µs 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,290] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:56 policy-pap | metadata.max.idle.ms = 300000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.038050079Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,294] INFO [Broker id=1] Finished LeaderAndIsr request in 178ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) 23:16:56 policy-pap | metric.reporters = [] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.046356595Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=8.312998ms 23:16:56 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:56 kafka | [2024-02-05 23:14:58,299] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, 
partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=u46pnWTBR6-v7DJLPWifgQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:56 policy-pap | metrics.num.samples = 2 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.050075359Z level=info msg="Executing migration" id="create provenance_type table" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:56 policy-pap | metrics.recording.level = INFO 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.050600869Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=525.389µs 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:56 kafka | [2024-02-05 23:14:58,307] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:56 policy-pap | metrics.sample.window.ms = 30000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.055172797Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,308] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request 
UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:56 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.056138176Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=964.989µs 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,312] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) 23:16:56 policy-pap | partitioner.availability.timeout.ms = 0 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.062328112Z level=info msg="Executing migration" id="create alert_image table" 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | partitioner.class = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.06311286Z level=info msg="Migration successfully executed" id="create alert_image table" duration=784.328µs 23:16:56 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | partitioner.ignore.keys = false 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:21.071563139Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | receive.buffer.bytes = 32768 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.072660448Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.10182ms 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:56 policy-pap | reconnect.backoff.max.ms = 1000 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.076486247Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | reconnect.backoff.ms = 50 
23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.07653989Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=49.661µs 23:16:56 policy-db-migrator | 23:16:56 policy-pap | request.timeout.ms = 30000 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.080600301Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | retries = 2147483647 23:16:56 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.081260231Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=661.25µs 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | retry.backoff.ms = 100 23:16:56 policy-db-migrator | 
-------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.084831082Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | sasl.client.callback.handler.class = null 23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.085671303Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=844.071µs 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | sasl.jaas.config = null 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.089513755Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.089974449Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.094342282Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:56 policy-pap | sasl.kerberos.service.name = null 23:16:56 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.094652362Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate 
table" duration=310.03µs
23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.0974172Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.098185344Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=767.644µs
23:16:56 kafka | [2024-02-05 23:14:58,313] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.login.callback.handler.class = null
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.101055486Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.login.class = null
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.105951498Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=4.895572ms
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.login.connect.timeout.ms = null
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.110340194Z level=info msg="Executing migration" id="create library_element table v1"
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.login.read.timeout.ms = null
23:16:56 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.111004195Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=663.741µs
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.113905274Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.114890117Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=986.103µs
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.117813361Z level=info msg="Executing migration" id="create library_element_connection table v1"
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.118424471Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=612.03µs
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.122719375Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:56 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.123438298Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=718.453µs
23:16:56 policy-pap | sasl.mechanism = GSSAPI
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.126466566Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
23:16:56 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.127179548Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=712.633µs
23:16:56 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.130007891Z level=info msg="Executing migration" id="increase max description length to 2048"
23:16:56 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.130026485Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=19.265µs
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.134218576Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.134268398Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=50.171µs
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.137073074Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.137511154Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=438.129µs
23:16:56 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.141301895Z level=info msg="Executing migration" id="create data_keys table"
23:16:56 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.143094331Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.793567ms
23:16:56 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:56 kafka | [2024-02-05 23:14:58,314] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.146324446Z level=info msg="Executing migration" id="create secrets table"
23:16:56 policy-pap | security.protocol = PLAINTEXT
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.147147473Z level=info msg="Migration successfully executed" id="create secrets table" duration=822.506µs
23:16:56 policy-pap | security.providers = null
23:16:56 policy-db-migrator | > upgrade 0820-toscatrigger.sql
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.151933249Z level=info msg="Executing migration" id="rename data_keys name column to id"
23:16:56 policy-pap | send.buffer.bytes = 131072
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.201697208Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=49.763599ms
23:16:56 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.205908915Z level=info msg="Executing migration" id="add name column into data_keys"
23:16:56 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.211651469Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.737212ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.cipher.suites = null
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.214731048Z level=info msg="Executing migration" id="copy data_keys id column values into name"
23:16:56 policy-db-migrator | 
23:16:56 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.214839473Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=108.464µs
23:16:56 policy-db-migrator | 
23:16:56 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.221686057Z level=info msg="Executing migration" id="rename data_keys name column to label"
23:16:56 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
23:16:56 policy-pap | ssl.engine.factory.class = null
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.274116952Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=52.425754ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.key.password = null
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.28899372Z level=info msg="Executing migration" id="rename data_keys id column back to name"
23:16:56 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
23:16:56 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.334953697Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=45.960516ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.keystore.certificate.chain = null
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.342261196Z level=info msg="Executing migration" id="create kv_store table v1"
23:16:56 policy-db-migrator | 
23:16:56 policy-pap | ssl.keystore.key = null
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.343738371Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.480047ms
23:16:56 policy-db-migrator | 
23:16:56 policy-pap | ssl.keystore.location = null
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.35363974Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
23:16:56 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
23:16:56 policy-pap | ssl.keystore.password = null
23:16:56 kafka | [2024-02-05 23:14:58,315] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.354836721Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.197032ms
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
23:16:56 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
23:16:56 policy-pap | ssl.keystore.type = JKS
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.358324833Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.protocol = TLSv1.3
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.358612658Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=287.405µs
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 policy-pap | ssl.provider = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.364049843Z level=info msg="Executing migration" id="create permission table"
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 policy-pap | ssl.secure.random.implementation = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.364973124Z level=info msg="Migration successfully executed" id="create permission table" duration=922.66µs
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
23:16:56 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.374756984Z level=info msg="Executing migration" id="add unique index permission.role_id"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.truststore.certificates = null
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.375777866Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.024812ms
23:16:56 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
23:16:56 policy-pap | ssl.truststore.location = null
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.380653103Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.truststore.password = null
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.381794133Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.140749ms
23:16:56 policy-db-migrator | 
23:16:56 policy-pap | ssl.truststore.type = JKS
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.386225189Z level=info msg="Executing migration" id="create role table"
23:16:56 policy-db-migrator | 
23:16:56 policy-pap | transaction.timeout.ms = 60000
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.387079752Z level=info msg="Migration successfully executed" id="create role table" duration=858.954µs
23:16:56 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:16:56 policy-pap | transactional.id = null
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.389993685Z level=info msg="Executing migration" id="add column display_name"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.398720696Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.723191ms
23:16:56 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
23:16:56 policy-pap | 
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.402337527Z level=info msg="Executing migration" id="add column group_name"
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:14:57.552+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.408004864Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.663736ms
23:16:56 policy-db-migrator | 
23:16:56 policy-pap | [2024-02-05T23:14:57.569+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.417031783Z level=info msg="Executing migration" id="add index role.org_id"
23:16:56 policy-db-migrator | 
23:16:56 policy-pap | [2024-02-05T23:14:57.569+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.419082889Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=2.054367ms
23:16:56 policy-pap | [2024-02-05T23:14:57.569+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174897569
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.423230071Z level=info msg="Executing migration" id="add unique index role_org_id_name"
23:16:56 policy-pap | [2024-02-05T23:14:57.569+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8340479e-6066-4c0d-8cea-5ee1d125717a, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:56 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.424262126Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.031985ms
23:16:56 policy-pap | [2024-02-05T23:14:57.569+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=5cac03a0-b751-4443-b270-6b6ceb5efee2, alive=false, publisher=null]]: starting
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.427442138Z level=info msg="Executing migration" id="add index role_org_id_uid"
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:14:57.570+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.428483454Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.041126ms
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:16:56 policy-pap | acks = -1
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.506971266Z level=info msg="Executing migration" id="create team role table"
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
23:16:56 policy-pap | auto.include.jmx.reporter = true
23:16:56 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.508346839Z level=info msg="Migration successfully executed" id="create team role table" duration=1.375753ms
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
23:16:56 policy-pap | batch.size = 16384
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.511987875Z level=info msg="Executing migration" id="add index team_role.org_id"
23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:16:56 policy-pap | bootstrap.servers = [kafka:9092]
23:16:56 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
23:16:56 grafana | logger=migrator
t=2024-02-05T23:14:21.514230004Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=2.241449ms 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:56 policy-pap | buffer.memory = 33554432 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.517627195Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:56 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.518806012Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.182318ms 23:16:56 policy-pap | client.id = producer-2 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.524754074Z level=info msg="Executing migration" id="add index team_role.team_id" 23:16:56 policy-pap | compression.type = none 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:21.525746259Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=995.826µs 23:16:56 policy-pap | connections.max.idle.ms = 540000 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.530303324Z level=info msg="Executing migration" id="create user role table" 23:16:56 policy-pap | delivery.timeout.ms = 120000 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:56 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.532055411Z level=info msg="Migration successfully executed" id="create user role table" duration=1.754838ms 23:16:56 policy-pap | enable.idempotence = true 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.539599815Z level=info msg="Executing migration" id="add index user_role.org_id" 23:16:56 policy-pap | interceptor.classes = [] 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:21.540986429Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.387364ms 23:16:56 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.544644451Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 23:16:56 policy-pap | linger.ms = 0 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.546172567Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.527536ms 23:16:56 policy-pap | max.block.ms = 60000 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.550072693Z level=info msg="Executing migration" id="add index user_role.user_id" 23:16:56 policy-pap | max.in.flight.requests.per.connection = 5 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:56 policy-db-migrator | CREATE 
INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.551330808Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.257205ms 23:16:56 policy-pap | max.request.size = 1048576 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.559250236Z level=info msg="Executing migration" id="create builtin role table" 23:16:56 policy-pap | metadata.max.age.ms = 300000 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.560349396Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.10073ms 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.563731614Z level=info msg="Executing migration" id="add index builtin_role.role_id" 23:16:56 policy-pap | metadata.max.idle.ms = 300000 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 
0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.564820521Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.088917ms 23:16:56 policy-pap | metric.reporters = [] 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.573768443Z level=info msg="Executing migration" id="add index builtin_role.name" 23:16:56 policy-pap | metrics.num.samples = 2 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:56 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.575147377Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.383014ms 23:16:56 policy-pap | metrics.recording.level = INFO 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.579880711Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 23:16:56 policy-pap | metrics.sample.window.ms = 30000 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for 
partition __consumer_offsets-36 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.587550193Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.669222ms 23:16:56 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.590716231Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:16:56 policy-pap | partitioner.availability.timeout.ms = 0 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.591734593Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.01256ms 23:16:56 policy-pap | partitioner.class = null 23:16:56 kafka | [2024-02-05 23:14:58,340] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.597105572Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:16:56 policy-pap | partitioner.ignore.keys = false 23:16:56 kafka | [2024-02-05 23:14:58,341] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for 
partition __consumer_offsets-28 (state.change.logger) 23:16:56 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.598172374Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.066693ms 23:16:56 policy-pap | receive.buffer.bytes = 32768 23:16:56 kafka | [2024-02-05 23:14:58,341] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.601385774Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:16:56 policy-pap | reconnect.backoff.max.ms = 1000 23:16:56 kafka | 
[2024-02-05 23:14:58,342] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.602715385Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.329392ms 23:16:56 policy-pap | reconnect.backoff.ms = 50 23:16:56 kafka | [2024-02-05 23:14:58,346] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.606148436Z level=info msg="Executing migration" id="add unique index role.uid" 23:16:56 policy-pap | request.timeout.ms = 30000 23:16:56 kafka | [2024-02-05 23:14:58,347] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.607282793Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.137608ms 23:16:56 policy-pap | retries = 2147483647 23:16:56 kafka | [2024-02-05 23:14:58,347] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.61237533Z level=info msg="Executing migration" id="create seed assignment table" 23:16:56 policy-pap | retry.backoff.ms = 100 23:16:56 kafka | [2024-02-05 23:14:58,348] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with 
initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.613069757Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=694.257µs 23:16:56 policy-pap | sasl.client.callback.handler.class = null 23:16:56 kafka | [2024-02-05 23:14:58,348] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.61603651Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:56 policy-pap | sasl.jaas.config = null 23:16:56 kafka | [2024-02-05 23:14:58,356] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.61713466Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.09786ms 23:16:56 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:56 kafka | [2024-02-05 23:14:58,356] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.620400792Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:56 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,357] INFO [Partition 
__consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.628403629Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.002467ms 23:16:56 policy-pap | sasl.kerberos.service.name = null 23:16:56 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:56 kafka | [2024-02-05 23:14:58,357] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.634152555Z level=info msg="Executing migration" id="permission kind migration" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,357] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.641073856Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.921621ms 23:16:56 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:16:56 kafka | [2024-02-05 23:14:58,369] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-pap | sasl.login.callback.handler.class = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.643873351Z level=info msg="Executing migration" id="permission attribute migration" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,371] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-pap | sasl.login.class = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.650575944Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.701802ms 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,371] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:56 policy-pap | sasl.login.connect.timeout.ms = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.653981077Z level=info msg="Executing migration" id="permission identifier migration" 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,371] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for 
partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-pap | sasl.login.read.timeout.ms = null 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.662908624Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.926547ms 23:16:56 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:56 kafka | [2024-02-05 23:14:58,371] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:16:56 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.667780751Z level=info msg="Executing migration" id="add permission identifier index" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,382] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.668519398Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=738.468µs 23:16:56 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:56 kafka | [2024-02-05 23:14:58,383] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-pap | sasl.login.refresh.window.factor = 0.8 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.671596126Z level=info msg="Executing migration" id="create query_history table v1" 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,383] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:56 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.672600534Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.002327ms 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,383] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.678345758Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,383] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger)
23:16:56 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.680267865Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.926168ms
23:16:56 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
23:16:56 kafka | [2024-02-05 23:14:58,392] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | sasl.mechanism = GSSAPI
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.683451598Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,392] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.683516673Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=65.855µs
23:16:56 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:56 kafka | [2024-02-05 23:14:58,392] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
23:16:56 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.694648541Z level=info msg="Executing migration" id="rbac disabled migrator"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,393] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.694740261Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=90.05µs
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,393] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.700395565Z level=info msg="Executing migration" id="teams permissions migration"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,400] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.700782563Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=386.818µs
23:16:56 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
23:16:56 kafka | [2024-02-05 23:14:58,400] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.70305783Z level=info msg="Executing migration" id="dashboard permissions"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,400] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
23:16:56 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.703531207Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=473.888µs
23:16:56 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:56 kafka | [2024-02-05 23:14:58,400] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.706150262Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,400] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.70666345Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=513.057µs
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,409] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.709716003Z level=info msg="Executing migration" id="drop managed folder create actions"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,410] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | security.protocol = PLAINTEXT
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.710010539Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=288.456µs
23:16:56 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:16:56 kafka | [2024-02-05 23:14:58,410] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
23:16:56 policy-pap | security.providers = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.715266003Z level=info msg="Executing migration" id="alerting notification permissions"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,410] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | send.buffer.bytes = 131072
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.715893575Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=622.881µs
23:16:56 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:56 kafka | [2024-02-05 23:14:58,410] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.719156946Z level=info msg="Executing migration" id="create query_history_star table v1"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,417] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.720540899Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.383314ms
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,417] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | ssl.cipher.suites = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.724070872Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,417] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
23:16:56 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.72520738Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.136218ms
23:16:56 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
23:16:56 kafka | [2024-02-05 23:14:58,417] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.72908284Z level=info msg="Executing migration" id="add column org_id in query_history_star"
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,417] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | ssl.engine.factory.class = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.73744922Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.365959ms
23:16:56 policy-pap | ssl.key.password = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.742600118Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
23:16:56 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:56 kafka | [2024-02-05 23:14:58,423] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.742713144Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=113.286µs
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,424] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | ssl.keystore.certificate.chain = null
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.745647971Z level=info msg="Executing migration" id="create correlation table v1"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.keystore.key = null
23:16:56 kafka | [2024-02-05 23:14:58,424] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.746637116Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=990.755µs
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.keystore.location = null
23:16:56 kafka | [2024-02-05 23:14:58,425] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.749812917Z level=info msg="Executing migration" id="add index correlations.uid"
23:16:56 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
23:16:56 policy-pap | ssl.keystore.password = null
23:16:56 kafka | [2024-02-05 23:14:58,425] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.751051658Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.238521ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.keystore.type = JKS
23:16:56 kafka | [2024-02-05 23:14:58,435] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.758721479Z level=info msg="Executing migration" id="add index correlations.source_uid"
23:16:56 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:56 policy-pap | ssl.protocol = TLSv1.3
23:16:56 kafka | [2024-02-05 23:14:58,436] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.760339297Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.617257ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.provider = null
23:16:56 kafka | [2024-02-05 23:14:58,436] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.763252038Z level=info msg="Executing migration" id="add correlation config column"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.secure.random.implementation = null
23:16:56 kafka | [2024-02-05 23:14:58,436] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.774872577Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.621448ms
23:16:56 policy-db-migrator |
23:16:56 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:56 kafka | [2024-02-05 23:14:58,436] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.778167495Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
23:16:56 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
23:16:56 policy-pap | ssl.truststore.certificates = null
23:16:56 kafka | [2024-02-05 23:14:58,442] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.779738112Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.568947ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.truststore.location = null
23:16:56 kafka | [2024-02-05 23:14:58,443] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.784065524Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
23:16:56 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:56 policy-pap | ssl.truststore.password = null
23:16:56 kafka | [2024-02-05 23:14:58,443] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.785262647Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.196842ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | ssl.truststore.type = JKS
23:16:56 kafka | [2024-02-05 23:14:58,443] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.789306054Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | transaction.timeout.ms = 60000
23:16:56 kafka | [2024-02-05 23:14:58,443] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.820896527Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=31.589082ms
23:16:56 policy-db-migrator |
23:16:56 policy-pap | transactional.id = null
23:16:56 kafka | [2024-02-05 23:14:58,451] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.823735412Z level=info msg="Executing migration" id="create correlation v2"
23:16:56 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:16:56 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:56 kafka | [2024-02-05 23:14:58,451] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.824470899Z level=info msg="Migration successfully executed" id="create correlation v2" duration=732.997µs
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap |
23:16:56 kafka | [2024-02-05 23:14:58,451] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.829552913Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
23:16:56 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:56 policy-pap | [2024-02-05T23:14:57.570+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
23:16:56 kafka | [2024-02-05 23:14:58,451] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.830760297Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.207045ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:16:56 kafka | [2024-02-05 23:14:58,452] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.835718003Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
23:16:56 policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:16:56 kafka | [2024-02-05 23:14:58,462] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.837706284Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.990481ms
23:16:56 policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1707174897573
23:16:56 kafka | [2024-02-05 23:14:58,463] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.843687472Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.844867931Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.180309ms
23:16:56 policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=5cac03a0-b751-4443-b270-6b6ceb5efee2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:56 kafka | [2024-02-05 23:14:58,463] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
23:16:56 kafka | [2024-02-05 23:14:58,463] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.848128931Z level=info msg="Executing migration" id="copy correlation v1 to v2"
23:16:56 policy-pap | [2024-02-05T23:14:57.573+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
23:16:56 kafka | [2024-02-05 23:14:58,463] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.848577103Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=447.802µs
23:16:56 policy-pap | [2024-02-05T23:14:57.576+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
23:16:56 kafka | [2024-02-05 23:14:58,470] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-db-migrator |
23:16:56 policy-pap | [2024-02-05T23:14:57.577+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
23:16:56 kafka | [2024-02-05 23:14:58,470] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.852017243Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | [2024-02-05T23:14:57.585+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
23:16:56 kafka | [2024-02-05 23:14:58,470] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.853282871Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.265118ms
23:16:56 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
23:16:56 policy-pap | [2024-02-05T23:14:57.585+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
23:16:56 kafka | [2024-02-05 23:14:58,470] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.858809216Z level=info msg="Executing migration" id="add provisioning column"
23:16:56 policy-pap | [2024-02-05T23:14:57.585+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
23:16:56 kafka | [2024-02-05 23:14:58,470] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.868379199Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.569372ms
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:14:57.586+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
23:16:56 kafka | [2024-02-05 23:14:58,477] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.871506779Z level=info msg="Executing migration" id="create entity_events table"
23:16:56 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:56 policy-pap | [2024-02-05T23:14:57.586+00:00|INFO|TimerManager|Thread-9] timer manager update started
23:16:56 kafka | [2024-02-05 23:14:58,478] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.872311452Z level=info msg="Migration successfully executed" id="create entity_events table" duration=804.454µs
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:14:57.586+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
23:16:56 kafka | [2024-02-05 23:14:58,478] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.875667744Z level=info msg="Executing migration" id="create dashboard public config v1"
23:16:56 policy-db-migrator |
23:16:56 policy-pap | [2024-02-05T23:14:57.587+00:00|INFO|ServiceManager|main] Policy PAP started
23:16:56 kafka | [2024-02-05 23:14:58,478] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.876856254Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.187989ms
23:16:56 policy-db-migrator |
23:16:56 policy-pap | [2024-02-05T23:14:57.587+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.666 seconds (process running for 11.332)
23:16:56 kafka | [2024-02-05 23:14:58,478] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.883585032Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
23:16:56 policy-pap | [2024-02-05T23:14:58.030+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.884155971Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
23:16:56 kafka | [2024-02-05 23:14:58,485] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | [2024-02-05T23:14:58.031+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ
23:16:56 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.888602191Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:56 kafka | [2024-02-05 23:14:58,485] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | [2024-02-05T23:14:58.031+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.889157648Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:56 kafka | [2024-02-05 23:14:58,485] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:14:58.031+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ
23:16:56 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.892676776Z level=info msg="Executing migration" id="Drop old dashboard public config table"
23:16:56 kafka | [2024-02-05 23:14:58,485] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:14:58.068+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.893504164Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=827.157µs
23:16:56 kafka | [2024-02-05 23:14:58,486] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:14:58.068+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.898832064Z level=info msg="Executing migration" id="recreate dashboard public config v1"
23:16:56 kafka | [2024-02-05 23:14:58,493] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | [2024-02-05T23:14:58.074+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.899797122Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=964.859µs
23:16:56 kafka | [2024-02-05 23:14:58,494] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | [2024-02-05T23:14:58.075+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: GFmMeC8ERWyjG0XVKKQ9OQ
23:16:56 policy-db-migrator | > upgrade 0100-pdp.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.903407513Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
23:16:56 kafka | [2024-02-05 23:14:58,494] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:14:58.142+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.904923377Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.513583ms
23:16:56 kafka | [2024-02-05 23:14:58,494] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:14:58.221+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:56 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.910327484Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:56 kafka | [2024-02-05 23:14:58,494] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:14:58.252+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.911612496Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.284683ms
23:16:56 kafka | [2024-02-05 23:14:58,499] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-db-migrator |
23:16:56 policy-pap | [2024-02-05T23:14:58.867+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.949263605Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
23:16:56 kafka | [2024-02-05 23:14:58,500] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-db-migrator |
23:16:56 policy-pap | [2024-02-05T23:14:58.875+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] (Re-)joining group
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.951758262Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=2.494287ms
23:16:56 kafka
| [2024-02-05 23:14:58,500] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:56 policy-pap | [2024-02-05T23:14:58.899+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Request joining group due to: need to re-join with the given member-id: consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.955808971Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:56 kafka | [2024-02-05 23:14:58,500] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | [2024-02-05T23:14:58.899+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.95787645Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.060608ms 23:16:56 kafka | [2024-02-05 23:14:58,500] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:56 policy-pap | [2024-02-05T23:14:58.899+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] (Re-)joining group 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.963963352Z level=info msg="Executing migration" id="Drop public config table" 23:16:56 kafka | [2024-02-05 23:14:58,511] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | [2024-02-05T23:14:58.952+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.965060822Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.095309ms 23:16:56 kafka | [2024-02-05 23:14:58,512] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | [2024-02-05T23:14:58.954+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.968376215Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:56 kafka | [2024-02-05 23:14:58,512] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | 
23:16:56 policy-pap | [2024-02-05T23:14:58.957+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.969440616Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.063991ms 23:16:56 kafka | [2024-02-05 23:14:58,512] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:56 policy-pap | [2024-02-05T23:14:58.958+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.9740798Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:56 kafka | [2024-02-05 23:14:58,513] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | [2024-02-05T23:14:58.958+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.977155828Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=3.074177ms 23:16:56 kafka | [2024-02-05 23:14:58,519] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:56 policy-pap | [2024-02-05T23:15:01.925+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a', protocol='range'} 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.980832153Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:56 kafka | [2024-02-05 23:14:58,520] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | [2024-02-05T23:15:01.935+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Finished assignment for group at generation 1: 
{consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a=Assignment(partitions=[policy-pdp-pap-0])} 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.982390577Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.552633ms 23:16:56 kafka | [2024-02-05 23:14:58,520] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | [2024-02-05T23:15:01.957+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a', protocol='range'} 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.986477995Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:56 kafka | [2024-02-05 23:14:58,520] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | [2024-02-05T23:15:01.958+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.987622595Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.14438ms 23:16:56 kafka | [2024-02-05 23:14:58,520] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader 
epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:16:56 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:56 policy-pap | [2024-02-05T23:15:01.961+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7', protocol='range'} 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:21.991951148Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:56 kafka | [2024-02-05 23:14:58,535] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | [2024-02-05T23:15:01.962+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7=Assignment(partitions=[policy-pdp-pap-0])} 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.024102558Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=32.151571ms 23:16:56 kafka | [2024-02-05 23:14:58,536] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN 
POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:56 policy-pap | [2024-02-05T23:15:01.965+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Adding newly assigned partitions: policy-pdp-pap-0 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.027552024Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:56 kafka | [2024-02-05 23:14:58,536] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | [2024-02-05T23:15:01.967+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7', protocol='range'} 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.034514051Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.97296ms 23:16:56 kafka | [2024-02-05 23:14:58,536] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | [2024-02-05T23:15:01.968+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.037570269Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:56 kafka | [2024-02-05 23:14:58,536] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 
from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | [2024-02-05T23:15:01.968+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.04370617Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.13563ms 23:16:56 kafka | [2024-02-05 23:14:58,543] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:56 policy-pap | [2024-02-05T23:15:01.987+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Found no committed offset for partition policy-pdp-pap-0 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.047936161Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:56 kafka | [2024-02-05 23:14:58,544] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-pap | [2024-02-05T23:15:01.987+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.048152539Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=216.159µs 23:16:56 kafka | [2024-02-05 23:14:58,544] INFO 
[Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | [2024-02-05T23:15:02.006+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-82113737-2238-440a-b31e-67419d0ce49a-3, groupId=82113737-2238-440a-b31e-67419d0ce49a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.050745743Z level=info msg="Executing migration" id="add share column" 23:16:56 kafka | [2024-02-05 23:14:58,544] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:16:56 policy-pap | [2024-02-05T23:15:02.006+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.056709845Z level=info msg="Migration successfully executed" id="add share column" duration=5.959971ms 23:16:56 kafka | [2024-02-05 23:14:58,544] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | [2024-02-05T23:15:05.122+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:56 kafka | [2024-02-05 23:14:58,551] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.060068501Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:16:56 policy-db-migrator | 23:16:56 policy-pap | [2024-02-05T23:15:05.122+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 23:16:56 kafka | [2024-02-05 23:14:58,551] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.06028624Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=217.409µs 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | [2024-02-05T23:15:05.129+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 7 ms 23:16:56 kafka | [2024-02-05 23:14:58,551] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for 
partition __consumer_offsets-47 (kafka.cluster.Partition) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.062992579Z level=info msg="Executing migration" id="create file table" 23:16:56 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:56 policy-pap | [2024-02-05T23:15:18.855+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.063716342Z level=info msg="Migration successfully executed" id="create file table" duration=723.653µs 23:16:56 kafka | [2024-02-05 23:14:58,551] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-pap | [] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.068875803Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:56 kafka | [2024-02-05 23:14:58,551] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | [2024-02-05T23:15:18.855+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.071831018Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.960236ms 23:16:56 kafka | [2024-02-05 23:14:58,559] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-db-migrator | 23:16:56 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce579082-502e-4e58-9380-d7baab3a6748","timestampMs":1707174918820,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"} 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.076170694Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:56 kafka | [2024-02-05 23:14:58,559] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:56 policy-pap | [2024-02-05T23:15:18.856+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.077405812Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.234858ms 23:16:56 kafka | [2024-02-05 23:14:58,559] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator 
t=2024-02-05T23:14:22.080759237Z level=info msg="Executing migration" id="create file_meta table" 23:16:56 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce579082-502e-4e58-9380-d7baab3a6748","timestampMs":1707174918820,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"} 23:16:56 kafka | [2024-02-05 23:14:58,559] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.081595945Z level=info msg="Migration successfully executed" id="create file_meta table" duration=836.118µs 23:16:56 policy-pap | [2024-02-05T23:15:18.863+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:56 kafka | [2024-02-05 23:14:58,559] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.08495008Z level=info msg="Executing migration" id="file table idx: path key" 23:16:56 policy-pap | [2024-02-05T23:15:18.971+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting 23:16:56 kafka | [2024-02-05 23:14:58,566] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.086227097Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.270295ms 23:16:56 policy-pap | [2024-02-05T23:15:18.971+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting listener 23:16:56 kafka | [2024-02-05 23:14:58,566] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.09174928Z level=info msg="Executing migration" id="set path collation in file table" 23:16:56 policy-pap | [2024-02-05T23:15:18.971+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting timer 23:16:56 kafka | [2024-02-05 23:14:58,567] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.091891692Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=142.412µs 23:16:56 
policy-pap | [2024-02-05T23:15:18.972+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=687caf5b-4d92-42de-acdb-f82aab7cc43c, expireMs=1707174948972]
23:16:56 kafka | [2024-02-05 23:14:58,567] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.095807453Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
23:16:56 policy-pap | [2024-02-05T23:15:18.974+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting enqueue
23:16:56 kafka | [2024-02-05 23:14:58,567] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.095873947Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=66.945µs
23:16:56 policy-pap | [2024-02-05T23:15:18.974+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=687caf5b-4d92-42de-acdb-f82aab7cc43c, expireMs=1707174948972]
23:16:56 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
23:16:56 kafka | [2024-02-05 23:14:58,577] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.099461826Z level=info msg="Executing migration" id="managed permissions migration"
23:16:56 policy-pap | [2024-02-05T23:15:18.974+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate started
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,577] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.100361038Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=898.412µs
23:16:56 policy-pap | [2024-02-05T23:15:18.976+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,577] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.105117178Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
23:16:56 policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"687caf5b-4d92-42de-acdb-f82aab7cc43c","timestampMs":1707174918952,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,577] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.105519628Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=400.02µs
23:16:56 policy-pap | [2024-02-05T23:15:19.013+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
23:16:56 kafka | [2024-02-05 23:14:58,577] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.109309751Z level=info msg="Executing migration" id="RBAC action name migrator"
23:16:56 policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"687caf5b-4d92-42de-acdb-f82aab7cc43c","timestampMs":1707174918952,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,584] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.110142069Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=835.878µs
23:16:56 policy-pap | [2024-02-05T23:15:19.013+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
23:16:56 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
23:16:56 kafka | [2024-02-05 23:14:58,584] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.113400122Z level=info msg="Executing migration" id="Add UID column to playlist"
23:16:56 policy-pap | [2024-02-05T23:15:19.022+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:56 policy-db-migrator | JOIN pdpstatistics b
23:16:56 kafka | [2024-02-05 23:14:58,584] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.123834538Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.430086ms
23:16:56 policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"687caf5b-4d92-42de-acdb-f82aab7cc43c","timestampMs":1707174918952,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
23:16:56 kafka | [2024-02-05 23:14:58,584] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.127380147Z level=info msg="Executing migration" id="Update uid column values in playlist"
23:16:56 policy-pap | [2024-02-05T23:15:19.022+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
23:16:56 policy-db-migrator | SET a.id = b.id
23:16:56 kafka | [2024-02-05 23:14:58,585] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.127608318Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=228.171µs
23:16:56 policy-pap | [2024-02-05T23:15:19.037+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,594] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.130832603Z level=info msg="Executing migration" id="Add index for uid in playlist"
23:16:56 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"73abfafa-5e87-480f-971c-84c352b572be","timestampMs":1707174919022,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,594] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.132368528Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.535225ms
23:16:56 policy-pap | [2024-02-05T23:15:19.037+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,594] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.136739761Z level=info msg="Executing migration" id="update group index for alert rules"
23:16:56 policy-pap | [2024-02-05T23:15:19.037+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:56 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
23:16:56 kafka | [2024-02-05 23:14:58,594] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.137345327Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=605.576µs
23:16:56 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"73abfafa-5e87-480f-971c-84c352b572be","timestampMs":1707174919022,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup"}
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,594] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.140888974Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
23:16:56 policy-pap | [2024-02-05T23:15:19.040+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
23:16:56 kafka | [2024-02-05 23:14:58,600] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.141232422Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=343.088µs
23:16:56 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"687caf5b-4d92-42de-acdb-f82aab7cc43c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"97dae9b3-163a-482b-805c-f915fcf0db7a","timestampMs":1707174919025,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,601] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.145392137Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
23:16:56 policy-pap | [2024-02-05T23:15:19.061+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,601] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.14598287Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=590.733µs
23:16:56 policy-pap | [2024-02-05T23:15:19.063+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping enqueue
23:16:56 policy-db-migrator |
23:16:56 kafka | [2024-02-05 23:14:58,601] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.149444599Z level=info msg="Executing migration" id="add action column to seed_assignment"
23:16:56 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
23:16:56 kafka | [2024-02-05 23:14:58,601] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.158039682Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.592003ms
23:16:56 policy-pap | [2024-02-05T23:15:19.063+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping timer
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,612] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.163358058Z level=info msg="Executing migration" id="add scope column to seed_assignment"
23:16:56 policy-pap | [2024-02-05T23:15:19.063+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=687caf5b-4d92-42de-acdb-f82aab7cc43c, expireMs=1707174948972]
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
23:16:56 kafka | [2024-02-05 23:14:58,612] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.172014755Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.656226ms
23:16:56 policy-pap | [2024-02-05T23:15:19.063+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping listener
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,612] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.175336192Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
23:16:56 policy-pap | [2024-02-05T23:15:19.064+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopped
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.176479438Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.143237ms
23:16:56 kafka | [2024-02-05 23:14:58,612] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:15:19.065+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.181125524Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
23:16:56 kafka | [2024-02-05 23:14:58,612] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"687caf5b-4d92-42de-acdb-f82aab7cc43c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"97dae9b3-163a-482b-805c-f915fcf0db7a","timestampMs":1707174919025,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.29747907Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=116.351746ms
23:16:56 kafka | [2024-02-05 23:14:58,658] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | [2024-02-05T23:15:19.065+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 687caf5b-4d92-42de-acdb-f82aab7cc43c
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.300983398Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
23:16:56 policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate successful
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.301817736Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=834.447µs
23:16:56 kafka | [2024-02-05 23:14:58,658] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
23:16:56 policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 start publishing next request
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.305243646Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
23:16:56 kafka | [2024-02-05 23:14:58,658] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange starting
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.307252708Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.008492ms
23:16:56 kafka | [2024-02-05 23:14:58,659] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-db-migrator |
23:16:56 policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange starting listener
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.310936466Z level=info msg="Executing migration" id="add primary key to seed_assigment"
23:16:56 kafka | [2024-02-05 23:14:58,659] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange starting timer
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.344530921Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=33.594875ms
23:16:56 kafka | [2024-02-05 23:14:58,666] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-db-migrator | > upgrade 0210-sequence.sql
23:16:56 policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=54fa79c2-9992-4631-b988-2a9cecf2df7f, expireMs=1707174949070]
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.348035819Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
23:16:56 kafka | [2024-02-05 23:14:58,667] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange starting enqueue
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.34825605Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=222.671µs
23:16:56 kafka | [2024-02-05 23:14:58,667] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:56 policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange started
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.351205333Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
23:16:56 kafka | [2024-02-05 23:14:58,667] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | --------------
23:16:56 policy-pap | [2024-02-05T23:15:19.070+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=54fa79c2-9992-4631-b988-2a9cecf2df7f, expireMs=1707174949070]
23:16:56 kafka | [2024-02-05 23:14:58,667] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.351406978Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=201.646µs
23:16:56 policy-pap | [2024-02-05T23:15:19.071+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:56 kafka | [2024-02-05 23:14:58,674] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.35439459Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
23:16:56 kafka | [2024-02-05 23:14:58,674] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"54fa79c2-9992-4631-b988-2a9cecf2df7f","timestampMs":1707174918953,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator | > upgrade 0220-sequence.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.354611089Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=216.498µs
23:16:56 kafka | [2024-02-05 23:14:58,674] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:15:19.082+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.357672427Z level=info msg="Executing migration" id="create folder table"
23:16:56 kafka | [2024-02-05 23:14:58,674] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"54fa79c2-9992-4631-b988-2a9cecf2df7f","timestampMs":1707174918953,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.359160651Z level=info msg="Migration successfully executed" id="create folder table" duration=1.487884ms
23:16:56 kafka | [2024-02-05 23:14:58,674] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:15:19.082+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.404136495Z level=info msg="Executing migration" id="Add index for parent_uid"
23:16:56 kafka | [2024-02-05 23:14:58,680] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | [2024-02-05T23:15:19.089+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.406149388Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.013133ms
23:16:56 kafka | [2024-02-05 23:14:58,680] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"54fa79c2-9992-4631-b988-2a9cecf2df7f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d0f9af2d-ed17-4886-8d33-16b52a441775","timestampMs":1707174919081,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.411017704Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
23:16:56 kafka | [2024-02-05 23:14:58,680] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:15:19.090+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 54fa79c2-9992-4631-b988-2a9cecf2df7f
23:16:56 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.412225755Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.214302ms
23:16:56 kafka | [2024-02-05 23:14:58,680] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:15:19.102+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.41647137Z level=info msg="Executing migration" id="Update folder title length"
23:16:56 kafka | [2024-02-05 23:14:58,680] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"54fa79c2-9992-4631-b988-2a9cecf2df7f","timestampMs":1707174918953,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.416497116Z level=info msg="Migration successfully executed" id="Update folder title length" duration=26.506µs
23:16:56 kafka | [2024-02-05 23:14:58,689] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | [2024-02-05T23:15:19.102+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.420988595Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
23:16:56 kafka | [2024-02-05 23:14:58,689] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | [2024-02-05T23:15:19.106+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.422283857Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.294202ms
23:16:56 kafka | [2024-02-05 23:14:58,689] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
23:16:56 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"54fa79c2-9992-4631-b988-2a9cecf2df7f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d0f9af2d-ed17-4886-8d33-16b52a441775","timestampMs":1707174919081,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.426594366Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
23:16:56 kafka | [2024-02-05 23:14:58,689] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange stopping
23:16:56 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.427969236Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.37769ms
23:16:56 kafka | [2024-02-05 23:14:58,689] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange stopping enqueue
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.431954802Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
23:16:56 kafka | [2024-02-05 23:14:58,696] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange stopping timer
23:16:56 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.434608139Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.652747ms
23:16:56 kafka | [2024-02-05 23:14:58,696] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=54fa79c2-9992-4631-b988-2a9cecf2df7f, expireMs=1707174949070]
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.438475229Z level=info msg="Executing migration" id="Sync dashboard and folder table"
23:16:56 kafka | [2024-02-05 23:14:58,696] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange stopping listener
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.439172355Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=698.747µs
23:16:56 kafka | [2024-02-05 23:14:58,696] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange stopped
23:16:56 policy-db-migrator |
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.442416205Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
23:16:56 kafka | [2024-02-05 23:14:58,697] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpStateChange successful
23:16:56 policy-db-migrator | > upgrade 0120-toscatrigger.sql
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.442718083Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=300.338µs
23:16:56 kafka | [2024-02-05 23:14:58,702] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 start publishing next request
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.448386587Z level=info msg="Executing migration" id="create anon_device table"
23:16:56 kafka | [2024-02-05 23:14:58,702] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting
23:16:56 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.449220476Z level=info msg="Migration successfully executed" id="create anon_device table" duration=834.388µs
23:16:56 kafka | [2024-02-05 23:14:58,703] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate
starting listener 23:16:56 policy-db-migrator | -------------- 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,703] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.453191558Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.454035557Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=843.869µs 23:16:56 kafka | [2024-02-05 23:14:58,703] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting timer 23:16:56 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.458205006Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:16:56 kafka | [2024-02-05 23:14:58,709] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=39c1570c-dd05-4983-bede-a1e58213f1cf, expireMs=1707174949107] 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.459209411Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.004466ms 23:16:56 kafka | 
[2024-02-05 23:14:58,709] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate starting enqueue 23:16:56 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.463861878Z level=info msg="Executing migration" id="create signing_key table" 23:16:56 kafka | [2024-02-05 23:14:58,709] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate started 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.464558434Z level=info msg="Migration successfully executed" id="create signing_key table" duration=696.397µs 23:16:56 kafka | [2024-02-05 23:14:58,709] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:19.107+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:56 kafka | [2024-02-05 23:14:58,709] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39c1570c-dd05-4983-bede-a1e58213f1cf","timestampMs":1707174919093,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.467382479Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 23:16:56 kafka | [2024-02-05 23:14:58,717] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-pap | [2024-02-05T23:15:19.123+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.468310428Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=927.908µs 23:16:56 kafka | [2024-02-05 23:14:58,718] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-pap | {"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39c1570c-dd05-4983-bede-a1e58213f1cf","timestampMs":1707174919093,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:56 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.471205879Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:16:56 kafka | [2024-02-05 23:14:58,718] INFO 
[Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:19.123+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.472018272Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=808.993µs 23:16:56 kafka | [2024-02-05 23:14:58,718] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:19.127+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:56 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.477869958Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:16:56 kafka | [2024-02-05 23:14:58,718] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39c1570c-dd05-4983-bede-a1e58213f1cf","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"596c9de2-f9f1-49de-b6fb-fa0fd5c472b5","timestampMs":1707174919116,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.478090058Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=220.779µs 23:16:56 kafka | [2024-02-05 23:14:58,726] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-pap | [2024-02-05T23:15:19.127+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.480987019Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 23:16:56 kafka | [2024-02-05 23:14:58,726] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-pap | [2024-02-05T23:15:19.128+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping enqueue 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.487689586Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.702597ms 23:16:56 kafka | [2024-02-05 23:14:58,726] INFO [Partition 
__consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:19.128+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping timer 23:16:56 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.490752545Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:16:56 kafka | [2024-02-05 23:14:58,727] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:19.128+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=39c1570c-dd05-4983-bede-a1e58213f1cf, expireMs=1707174949107] 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.491373485Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=615.55µs 23:16:56 kafka | [2024-02-05 23:14:58,727] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-pap | [2024-02-05T23:15:19.128+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopping listener 23:16:56 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.496266975Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:16:56 kafka | [2024-02-05 23:14:58,739] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-pap | [2024-02-05T23:15:19.128+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate stopped 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.497484559Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.217424ms 23:16:56 kafka | [2024-02-05 23:14:58,741] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-pap | [2024-02-05T23:15:19.129+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.500499397Z level=info msg="Executing migration" id="create sso_setting table" 23:16:56 kafka | [2024-02-05 23:14:58,741] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:16:56 policy-pap | 
{"source":"pap-11f9574e-3421-4814-b2a4-afcec4d48235","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39c1570c-dd05-4983-bede-a1e58213f1cf","timestampMs":1707174919093,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.501453912Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=954.264µs 23:16:56 kafka | [2024-02-05 23:14:58,741] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:16:56 policy-pap | [2024-02-05T23:15:19.129+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.505865344Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 23:16:56 kafka | [2024-02-05 23:14:58,741] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-pap | [2024-02-05T23:15:19.131+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.506847185Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=982.82µs 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,748] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39c1570c-dd05-4983-bede-a1e58213f1cf","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"596c9de2-f9f1-49de-b6fb-fa0fd5c472b5","timestampMs":1707174919116,"name":"apex-4ce27c51-fddd-4fb2-8599-201068c664c5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.511264749Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,748] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-pap | [2024-02-05T23:15:19.131+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 39c1570c-dd05-4983-bede-a1e58213f1cf 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.511523797Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=259.388µs 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,748] INFO [Partition __consumer_offsets-27 
broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:19.133+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 PdpUpdate successful 23:16:56 grafana | logger=migrator t=2024-02-05T23:14:22.514682047Z level=info msg="migrations completed" performed=526 skipped=0 duration=4.107196231s 23:16:56 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:16:56 kafka | [2024-02-05 23:14:58,748] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:19.133+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4ce27c51-fddd-4fb2-8599-201068c664c5 has no more requests 23:16:56 grafana | logger=sqlstore t=2024-02-05T23:14:22.533242131Z level=info msg="Created default admin" user=admin 23:16:56 policy-db-migrator | -------------- 23:16:56 kafka | [2024-02-05 23:14:58,748] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-pap | [2024-02-05T23:15:25.712+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:56 grafana | logger=sqlstore t=2024-02-05T23:14:22.533622136Z level=info msg="Created default organization" 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,754] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-pap | [2024-02-05T23:15:25.719+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:56 grafana | logger=secrets t=2024-02-05T23:14:22.545078043Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:56 policy-db-migrator | 23:16:56 kafka | [2024-02-05 23:14:58,755] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-pap | [2024-02-05T23:15:26.129+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup 23:16:56 grafana | logger=plugin.store t=2024-02-05T23:14:22.561377648Z level=info msg="Loading plugins..." 
23:16:56 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:56 kafka | [2024-02-05 23:14:58,755] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:26.665+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=local.finder t=2024-02-05T23:14:22.597076456Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:56 kafka | [2024-02-05 23:14:58,755] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:26.665+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup 23:16:56 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:16:56 grafana | logger=plugin.store t=2024-02-05T23:14:22.597132629Z level=info msg="Plugins loaded" count=55 duration=35.756221ms 23:16:56 kafka | [2024-02-05 23:14:58,755] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=query_data t=2024-02-05T23:14:22.599222819Z level=info msg="Query Service initialization" 23:16:56 kafka | [2024-02-05 23:14:58,766] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-pap | [2024-02-05T23:15:27.185+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group testGroup 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=live.push_http t=2024-02-05T23:14:22.602982634Z level=info msg="Live Push Gateway initialization" 23:16:56 kafka | [2024-02-05 23:14:58,767] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-pap | [2024-02-05T23:15:27.405+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=ngalert.migration t=2024-02-05T23:14:22.609985149Z level=info msg=Starting 23:16:56 kafka | [2024-02-05 23:14:58,767] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:27.498+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:56 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:16:56 grafana | logger=ngalert.migration orgID=1 t=2024-02-05T23:14:22.611741325Z level=info msg="Migrating alerts for organisation" 23:16:56 kafka | [2024-02-05 23:14:58,767] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 
(kafka.cluster.Partition) 23:16:56 policy-pap | [2024-02-05T23:15:27.498+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group testGroup 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=ngalert.migration orgID=1 t=2024-02-05T23:14:22.613628298Z level=info msg="Alerts found to migrate" alerts=0 23:16:56 policy-pap | [2024-02-05T23:15:27.499+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group testGroup 23:16:56 kafka | [2024-02-05 23:14:58,767] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-05T23:14:22.617510742Z level=info msg="Completed legacy migration" 23:16:56 policy-pap | [2024-02-05T23:15:27.512+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-05T23:15:27Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-05T23:15:27Z, user=policyadmin)] 23:16:56 kafka | [2024-02-05 23:14:58,774] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:56 policy-db-migrator | 23:16:56 grafana | logger=infra.usagestats.collector t=2024-02-05T23:14:22.654013621Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:56 policy-pap | [2024-02-05T23:15:28.202+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group 
testGroup 23:16:56 kafka | [2024-02-05 23:14:58,775] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:56 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:56 grafana | logger=provisioning.datasources t=2024-02-05T23:14:22.65653999Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:56 policy-pap | [2024-02-05T23:15:28.203+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:56 kafka | [2024-02-05 23:14:58,775] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | -------------- 23:16:56 grafana | logger=provisioning.alerting t=2024-02-05T23:14:22.669967549Z level=info msg="starting to provision alerting" 23:16:56 policy-pap | [2024-02-05T23:15:28.203+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:56 kafka | [2024-02-05 23:14:58,775] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:56 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:56 grafana | logger=provisioning.alerting t=2024-02-05T23:14:22.669986673Z level=info msg="finished to provision alerting" 23:16:56 policy-pap | [2024-02-05T23:15:28.203+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup 23:16:56 kafka | [2024-02-05 23:14:58,775] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas 
[] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=grafanaStorageLogger t=2024-02-05T23:14:22.670319338Z level=info msg="Storage starting"
23:16:56 policy-pap | [2024-02-05T23:15:28.203+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
23:16:56 kafka | [2024-02-05 23:14:58,781] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=ngalert.state.manager t=2024-02-05T23:14:22.670456168Z level=info msg="Warming state cache for startup"
23:16:56 policy-pap | [2024-02-05T23:15:28.215+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-05T23:15:28Z, user=policyadmin)]
23:16:56 kafka | [2024-02-05 23:14:58,782] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=ngalert.multiorg.alertmanager t=2024-02-05T23:14:22.670920184Z level=info msg="Starting MultiOrg Alertmanager"
23:16:56 policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
23:16:56 kafka | [2024-02-05 23:14:58,782] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
23:16:56 grafana | logger=ngalert.state.manager t=2024-02-05T23:14:22.670988079Z level=info msg="State cache has been initialized" states=0 duration=531.12µs
23:16:56 policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
23:16:56 kafka | [2024-02-05 23:14:58,782] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 grafana | logger=ngalert.scheduler t=2024-02-05T23:14:22.671024037Z level=info msg="Starting scheduler" tickInterval=10s
23:16:56 policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
23:16:56 kafka | [2024-02-05 23:14:58,782] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=ticker t=2024-02-05T23:14:22.671101174Z level=info msg=starting first_tick=2024-02-05T23:14:30Z
23:16:56 policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
23:16:56 kafka | [2024-02-05 23:14:58,796] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=http.server t=2024-02-05T23:14:22.682082384Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
23:16:56 policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
23:16:56 kafka | [2024-02-05 23:14:58,796] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=grafana-apiserver t=2024-02-05T23:14:22.686502057Z level=info msg="Authentication is disabled"
23:16:56 policy-pap | [2024-02-05T23:15:28.523+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
23:16:56 kafka | [2024-02-05 23:14:58,797] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
23:16:56 grafana | logger=grafana-apiserver t=2024-02-05T23:14:22.708935062Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
23:16:56 policy-pap | [2024-02-05T23:15:28.533+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-05T23:15:28Z, user=policyadmin)]
23:16:56 kafka | [2024-02-05 23:14:58,797] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=grafana.update.checker t=2024-02-05T23:14:22.721740992Z level=info msg="Update check succeeded" duration=51.353229ms
23:16:56 policy-pap | [2024-02-05T23:15:48.972+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=687caf5b-4d92-42de-acdb-f82aab7cc43c, expireMs=1707174948972]
23:16:56 kafka | [2024-02-05 23:14:58,797] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
23:16:56 grafana | logger=sqlstore.transactions t=2024-02-05T23:14:22.755716663Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:56 policy-pap | [2024-02-05T23:15:49.071+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=54fa79c2-9992-4631-b988-2a9cecf2df7f, expireMs=1707174949070]
23:16:56 kafka | [2024-02-05 23:14:58,804] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-db-migrator | --------------
23:16:56 grafana | logger=plugins.update.checker t=2024-02-05T23:14:22.76650818Z level=info msg="Update check succeeded" duration=95.51499ms
23:16:56 policy-pap | [2024-02-05T23:15:49.078+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
23:16:56 kafka | [2024-02-05 23:14:58,805] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-db-migrator | 
23:16:56 grafana | logger=infra.usagestats t=2024-02-05T23:16:02.683692382Z level=info msg="Usage stats are ready to report"
23:16:56 policy-pap | [2024-02-05T23:15:49.080+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
23:16:56 kafka | [2024-02-05 23:14:58,805] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,805] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | > upgrade 0100-upgrade.sql
23:16:56 kafka | [2024-02-05 23:14:58,805] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,818] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-db-migrator | select 'upgrade to 1100 completed' as msg
23:16:56 kafka | [2024-02-05 23:14:58,819] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,819] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,819] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | msg
23:16:56 kafka | [2024-02-05 23:14:58,820] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-db-migrator | upgrade to 1100 completed
23:16:56 kafka | [2024-02-05 23:14:58,829] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,829] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:56 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
23:16:56 kafka | [2024-02-05 23:14:58,829] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,829] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
23:16:56 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
23:16:56 kafka | [2024-02-05 23:14:58,829] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(csBd2HU8Tmiot-5BjYrBHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
23:16:56 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
23:16:56 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0120-audit_sequence.sql
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:16:56 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0130-statistics_sequence.sql
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
23:16:56 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
23:16:56 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
23:16:56 policy-db-migrator | TRUNCATE TABLE sequence
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
23:16:56 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
23:16:56 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:16:56 policy-db-migrator | --------------
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
23:16:56 policy-db-migrator | 
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator | DROP TABLE pdpstatistics
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator | 
23:16:56 policy-db-migrator | 
23:16:56 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator | 
23:16:56 policy-db-migrator | 
23:16:56 policy-db-migrator | > upgrade 0120-statistics_sequence.sql
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator | DROP TABLE statistics_sequence
23:16:56 policy-db-migrator | --------------
23:16:56 policy-db-migrator | 
23:16:56 policy-db-migrator | policyadmin: OK: upgrade (1300)
23:16:56 policy-db-migrator | name version
23:16:56 policy-db-migrator | policyadmin 1300
23:16:56 policy-db-migrator | ID script operation from_version to_version tag success atTime
23:16:56 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
23:16:56 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
23:16:56 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
23:16:56 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
23:16:56 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
23:16:56 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
23:16:56 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
23:16:56 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
23:16:56 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
23:16:56 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:26
23:16:56 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:27
23:16:56 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:28
23:16:56 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:29
23:16:56 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0502242314260800u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:30
23:16:56 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0502242314260900u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0502242314261000u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0502242314261100u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0502242314261200u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0502242314261200u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0502242314261200u 1 2024-02-05 23:14:31
23:16:56 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0502242314261200u 1 2024-02-05 23:14:32
23:16:56 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0502242314261300u 1 2024-02-05 23:14:32
23:16:56 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0502242314261300u 1 2024-02-05 23:14:32
23:16:56 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0502242314261300u 1 2024-02-05 23:14:32
23:16:56 policy-db-migrator | policyadmin: OK @ 1300
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1
epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,833] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading 
of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,838] INFO [Broker id=1] Finished LeaderAndIsr request in 526ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 5 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,849] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=csBd2HU8Tmiot-5BjYrBHg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,850] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,850] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,850] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1],
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,858] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,858] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 21 
milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,859] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,859] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,860] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,860] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,860] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,860] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 
23:14:58,860] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,860] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,859] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,860] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,860] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 24 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,861] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,862] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,863] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:56 kafka | [2024-02-05 23:14:58,894] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 82113737-2238-440a-b31e-67419d0ce49a in Empty state. Created a new member id consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,908] INFO [GroupCoordinator 1]: Preparing to rebalance group 82113737-2238-440a-b31e-67419d0ce49a in state PreparingRebalance with old generation 0 (__consumer_offsets-32) (reason: Adding new member consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,956] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:58,960] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:59,179] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 447a3058-d755-46ac-8e2e-59b142489c6a in Empty state. Created a new member id consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:14:59,182] INFO [GroupCoordinator 1]: Preparing to rebalance group 447a3058-d755-46ac-8e2e-59b142489c6a in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:15:01,922] INFO [GroupCoordinator 1]: Stabilized group 82113737-2238-440a-b31e-67419d0ce49a generation 1 (__consumer_offsets-32) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:15:01,946] INFO [GroupCoordinator 1]: Assignment received from leader consumer-82113737-2238-440a-b31e-67419d0ce49a-3-afbe5a78-14ce-4b81-b5b9-c4d8b181932a for group 82113737-2238-440a-b31e-67419d0ce49a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:15:01,960] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:15:01,964] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-88747db9-724e-4f6e-b0e4-68a0c0a578e7 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:15:02,183] INFO [GroupCoordinator 1]: Stabilized group 447a3058-d755-46ac-8e2e-59b142489c6a generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:56 kafka | [2024-02-05 23:15:02,203] INFO [GroupCoordinator 1]: Assignment received from leader consumer-447a3058-d755-46ac-8e2e-59b142489c6a-2-0393cca8-4f7c-4ac1-8be7-b086b667694d for group 447a3058-d755-46ac-8e2e-59b142489c6a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:56 ++ echo 'Tearing down containers...'
23:16:56 Tearing down containers...
23:16:56 ++ docker-compose down -v --remove-orphans
23:16:56 Stopping policy-apex-pdp ...
23:16:56 Stopping policy-pap ...
23:16:56 Stopping policy-api ...
23:16:56 Stopping grafana ...
23:16:56 Stopping kafka ...
23:16:56 Stopping simulator ...
23:16:56 Stopping prometheus ...
23:16:56 Stopping compose_zookeeper_1 ...
23:16:56 Stopping mariadb ...
23:16:57 Stopping grafana ... done
23:16:57 Stopping prometheus ... done
23:17:07 Stopping policy-apex-pdp ... done
23:17:17 Stopping simulator ... done
23:17:17 Stopping policy-pap ... done
23:17:18 Stopping mariadb ... done
23:17:18 Stopping kafka ... done
23:17:19 Stopping compose_zookeeper_1 ... done
23:17:27 Stopping policy-api ... done
23:17:28 Removing policy-apex-pdp ...
23:17:28 Removing policy-pap ...
23:17:28 Removing policy-api ...
23:17:28 Removing policy-db-migrator ...
23:17:28 Removing grafana ...
23:17:28 Removing kafka ...
23:17:28 Removing simulator ...
23:17:28 Removing prometheus ...
23:17:28 Removing compose_zookeeper_1 ...
23:17:28 Removing mariadb ...
23:17:28 Removing simulator ... done
23:17:28 Removing prometheus ... done
23:17:28 Removing grafana ... done
23:17:28 Removing compose_zookeeper_1 ... done
23:17:28 Removing policy-apex-pdp ... done
23:17:28 Removing policy-api ...
done 23:17:28 Removing policy-db-migrator ... done 23:17:28 Removing mariadb ... done 23:17:28 Removing policy-pap ... done 23:17:28 Removing kafka ... done 23:17:28 Removing network compose_default 23:17:28 ++ cd /w/workspace/policy-pap-master-project-csit-pap 23:17:28 + load_set 23:17:28 + _setopts=hxB 23:17:28 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:17:28 ++ tr : ' ' 23:17:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:28 + set +o braceexpand 23:17:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:28 + set +o hashall 23:17:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:28 + set +o interactive-comments 23:17:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:28 + set +o xtrace 23:17:28 ++ echo hxB 23:17:28 ++ sed 's/./& /g' 23:17:28 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:28 + set +h 23:17:28 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:28 + set +x 23:17:28 + [[ -n /tmp/tmp.Hjz3EwQKXg ]] 23:17:28 + rsync -av /tmp/tmp.Hjz3EwQKXg/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:17:28 sending incremental file list 23:17:28 ./ 23:17:28 log.html 23:17:28 output.xml 23:17:28 report.html 23:17:28 testplan.txt 23:17:28 23:17:28 sent 910,202 bytes received 95 bytes 1,820,594.00 bytes/sec 23:17:28 total size is 909,656 speedup is 1.00 23:17:28 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:17:28 + exit 0 23:17:28 $ ssh-agent -k 23:17:28 unset SSH_AUTH_SOCK; 23:17:28 unset SSH_AGENT_PID; 23:17:28 echo Agent pid 2118 killed; 23:17:28 [ssh-agent] Stopped. 23:17:28 Robot results publisher started... 23:17:28 INFO: Checking test criticality is deprecated and will be dropped in a future release! 23:17:28 -Parsing output xml: 23:17:28 Done! 23:17:28 WARNING! Could not find file: **/log.html 23:17:28 WARNING! Could not find file: **/report.html 23:17:28 -Copying log files to build dir: 23:17:29 Done! 23:17:29 -Assigning results to build: 23:17:29 Done! 
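The `load_set` trace above walks the colon-separated long options in `$SHELLOPTS` and the previously captured short flags (`hxB`, taken from `$-`), switching each one off with `set +o NAME` / `set +FLAG`. A minimal standalone sketch of that idiom (the captured flag string is hypothetical, not read from this job):

```shell
#!/bin/bash
# Sketch of the load_set idiom from the trace: disable every long option
# currently listed in SHELLOPTS, then every saved short flag.
_setopts=hxB                                  # short flags captured earlier via $-
for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
  set +o "$i" 2>/dev/null || true             # some options may not be settable here
done
for i in $(echo "$_setopts" | sed 's/./& /g'); do
  set "+$i" 2>/dev/null || true               # e.g. set +h, set +x, set +B
done
LOAD_SET_DONE=1
```

The `sed 's/./& /g'` step splits the flag string into single characters so each can be passed to `set` individually, which is exactly what the `+ set +h` / `+ set +x` lines in the trace show.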
23:17:29 -Checking thresholds: 23:17:29 Done! 23:17:29 Done publishing Robot results. 23:17:29 [PostBuildScript] - [INFO] Executing post build scripts. 23:17:29 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12425816479566449737.sh 23:17:29 ---> sysstat.sh 23:17:29 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4593238408155188927.sh 23:17:29 ---> package-listing.sh 23:17:29 ++ facter osfamily 23:17:29 ++ tr '[:upper:]' '[:lower:]' 23:17:29 + OS_FAMILY=debian 23:17:29 + workspace=/w/workspace/policy-pap-master-project-csit-pap 23:17:29 + START_PACKAGES=/tmp/packages_start.txt 23:17:29 + END_PACKAGES=/tmp/packages_end.txt 23:17:29 + DIFF_PACKAGES=/tmp/packages_diff.txt 23:17:29 + PACKAGES=/tmp/packages_start.txt 23:17:29 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:29 + PACKAGES=/tmp/packages_end.txt 23:17:29 + case "${OS_FAMILY}" in 23:17:29 + dpkg -l 23:17:29 + grep '^ii' 23:17:29 + '[' -f /tmp/packages_start.txt ']' 23:17:29 + '[' -f /tmp/packages_end.txt ']' 23:17:29 + diff /tmp/packages_start.txt /tmp/packages_end.txt 23:17:29 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:29 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:29 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:29 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6683213572039224443.sh 23:17:29 ---> capture-instance-metadata.sh 23:17:29 Setup pyenv: 23:17:29 system 23:17:29 3.8.13 23:17:29 3.9.13 23:17:29 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:29 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NYvV from file:/tmp/.os_lf_venv 23:17:31 lf-activate-venv(): INFO: Installing: lftools 23:17:41 lf-activate-venv(): INFO: Adding /tmp/venv-NYvV/bin to PATH 23:17:41 INFO: Running in OpenStack, capturing instance metadata 23:17:42 [policy-pap-master-project-csit-pap] $ 
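The `package-listing.sh` trace above snapshots the installed-package list (`dpkg -l | grep '^ii'`) into a start file and an end file, then diffs them to record what the job installed. A self-contained sketch of the same snapshot-and-diff pattern, using plain temp files instead of `dpkg` so it runs anywhere (the package names are hypothetical):

```shell
#!/bin/bash
# Snapshot "before" and "after" package lists, then diff them.
start=$(mktemp); end=$(mktemp); out=$(mktemp)
printf 'pkg-a 1.0\npkg-b 2.0\n'              > "$start"  # hypothetical "before"
printf 'pkg-a 1.0\npkg-b 2.1\npkg-c 0.3\n'   > "$end"    # hypothetical "after"
diff "$start" "$end" > "$out" || true        # diff exits 1 when files differ
added=$(grep -c '^>' "$out")                 # lines only present in the "after" list
echo "added/changed lines: $added"
rm -f "$start" "$end" "$out"
```

In the real script the diff output lands in `/tmp/packages_diff.txt` and is copied into the workspace `archives/` directory alongside the start and end snapshots, as the trace shows.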
/bin/bash /tmp/jenkins7532453900968516086.sh 23:17:42 provisioning config files... 23:17:42 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config8894190360013508123tmp 23:17:42 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 23:17:42 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 23:17:42 [EnvInject] - Injecting environment variables from a build step. 23:17:42 [EnvInject] - Injecting as environment variables the properties content 23:17:42 SERVER_ID=logs 23:17:42 23:17:42 [EnvInject] - Variables injected successfully. 23:17:42 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8426494187122072295.sh 23:17:42 ---> create-netrc.sh 23:17:42 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17052616562774512711.sh 23:17:42 ---> python-tools-install.sh 23:17:42 Setup pyenv: 23:17:42 system 23:17:42 3.8.13 23:17:42 3.9.13 23:17:42 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:42 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NYvV from file:/tmp/.os_lf_venv 23:17:43 lf-activate-venv(): INFO: Installing: lftools 23:17:51 lf-activate-venv(): INFO: Adding /tmp/venv-NYvV/bin to PATH 23:17:51 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2464292623418287803.sh 23:17:51 ---> sudo-logs.sh 23:17:51 Archiving 'sudo' log.. 
23:17:51 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5702974967367551005.sh 23:17:51 ---> job-cost.sh 23:17:51 Setup pyenv: 23:17:51 system 23:17:51 3.8.13 23:17:51 3.9.13 23:17:51 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:52 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NYvV from file:/tmp/.os_lf_venv 23:17:53 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 23:18:00 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 23:18:00 lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible. 23:18:00 lf-activate-venv(): INFO: Adding /tmp/venv-NYvV/bin to PATH 23:18:00 INFO: No Stack... 23:18:00 INFO: Retrieving Pricing Info for: v3-standard-8 23:18:01 INFO: Archiving Costs 23:18:01 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins8577616985268140686.sh 23:18:01 ---> logs-deploy.sh 23:18:01 Setup pyenv: 23:18:01 system 23:18:01 3.8.13 23:18:01 3.9.13 23:18:01 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:18:01 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NYvV from file:/tmp/.os_lf_venv 23:18:02 lf-activate-venv(): INFO: Installing: lftools 23:18:11 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 23:18:11 python-openstackclient 6.5.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible. 
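The two resolver errors above are the same clash seen from opposite sides: `lftools 0.37.8` pins `openstacksdk<1.5.0` while `python-openstackclient 6.5.0` requires `openstacksdk>=2.0.0`, so no single shared venv can satisfy both at once. One common mitigation is a venv per tool; the sketch below only prints the commands it would run (paths hypothetical), since it is illustrative rather than the job's actual fix:

```shell
#!/bin/bash
# One isolated venv per conflicting CLI avoids the shared-dependency clash.
for tool in lftools python-openstackclient; do
  venv="/tmp/venv-${tool}"
  echo "python3 -m venv ${venv} && ${venv}/bin/pip install ${tool}"
done
```

Running `pip check` inside a shared venv surfaces the same conflict explicitly instead of leaving it as a warning in the install log.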
23:18:11 lf-activate-venv(): INFO: Adding /tmp/venv-NYvV/bin to PATH 23:18:11 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1562 23:18:11 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 23:18:12 Archives upload complete. 23:18:13 INFO: archiving logs to Nexus 23:18:13 ---> uname -a: 23:18:13 Linux prd-ubuntu1804-docker-8c-8g-2890 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 23:18:13 23:18:13 23:18:13 ---> lscpu: 23:18:13 Architecture: x86_64 23:18:13 CPU op-mode(s): 32-bit, 64-bit 23:18:13 Byte Order: Little Endian 23:18:13 CPU(s): 8 23:18:13 On-line CPU(s) list: 0-7 23:18:13 Thread(s) per core: 1 23:18:13 Core(s) per socket: 1 23:18:13 Socket(s): 8 23:18:13 NUMA node(s): 1 23:18:13 Vendor ID: AuthenticAMD 23:18:13 CPU family: 23 23:18:13 Model: 49 23:18:13 Model name: AMD EPYC-Rome Processor 23:18:13 Stepping: 0 23:18:13 CPU MHz: 2799.998 23:18:13 BogoMIPS: 5599.99 23:18:13 Virtualization: AMD-V 23:18:13 Hypervisor vendor: KVM 23:18:13 Virtualization type: full 23:18:13 L1d cache: 32K 23:18:13 L1i cache: 32K 23:18:13 L2 cache: 512K 23:18:13 L3 cache: 16384K 23:18:13 NUMA node0 CPU(s): 0-7 23:18:13 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 23:18:13 23:18:13 23:18:13 ---> nproc: 23:18:13 8 23:18:13 23:18:13 23:18:13 ---> df -h: 23:18:13 
Filesystem Size Used Avail Use% Mounted on 23:18:13 udev 16G 0 16G 0% /dev 23:18:13 tmpfs 3.2G 708K 3.2G 1% /run 23:18:13 /dev/vda1 155G 15G 141G 10% / 23:18:13 tmpfs 16G 0 16G 0% /dev/shm 23:18:13 tmpfs 5.0M 0 5.0M 0% /run/lock 23:18:13 tmpfs 16G 0 16G 0% /sys/fs/cgroup 23:18:13 /dev/vda15 105M 4.4M 100M 5% /boot/efi 23:18:13 tmpfs 3.2G 0 3.2G 0% /run/user/1001 23:18:13 23:18:13 23:18:13 ---> free -m: 23:18:13 total used free shared buff/cache available 23:18:13 Mem: 32167 850 24624 0 6692 30860 23:18:13 Swap: 1023 0 1023 23:18:13 23:18:13 23:18:13 ---> ip addr: 23:18:13 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 23:18:13 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 23:18:13 inet 127.0.0.1/8 scope host lo 23:18:13 valid_lft forever preferred_lft forever 23:18:13 inet6 ::1/128 scope host 23:18:13 valid_lft forever preferred_lft forever 23:18:13 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 23:18:13 link/ether fa:16:3e:80:f1:a3 brd ff:ff:ff:ff:ff:ff 23:18:13 inet 10.30.107.11/23 brd 10.30.107.255 scope global dynamic ens3 23:18:13 valid_lft 85929sec preferred_lft 85929sec 23:18:13 inet6 fe80::f816:3eff:fe80:f1a3/64 scope link 23:18:13 valid_lft forever preferred_lft forever 23:18:13 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 23:18:13 link/ether 02:42:64:91:67:cd brd ff:ff:ff:ff:ff:ff 23:18:13 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 23:18:13 valid_lft forever preferred_lft forever 23:18:13 23:18:13 23:18:13 ---> sar -b -r -n DEV: 23:18:13 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-2890) 02/05/24 _x86_64_ (8 CPU) 23:18:13 23:18:13 23:10:26 LINUX RESTART (8 CPU) 23:18:13 23:18:13 23:11:01 tps rtps wtps bread/s bwrtn/s 23:18:13 23:12:01 109.07 36.49 72.57 1699.98 24459.39 23:18:13 23:13:01 127.31 23.06 104.25 2766.34 30184.17 23:18:13 23:14:01 210.23 0.18 210.05 20.66 115451.29 23:18:13 23:15:01 360.86 12.68 348.18 810.30 76792.28 23:18:13 23:16:01 17.98 0.07 17.91 3.07 
18053.29 23:18:13 23:17:01 23.50 0.10 23.40 15.06 19102.60 23:18:13 23:18:01 80.99 1.95 79.04 112.11 21355.47 23:18:13 Average: 132.85 10.65 122.20 775.36 43628.36 23:18:13 23:18:13 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 23:18:13 23:12:01 30101604 31715608 2837616 8.61 69980 1853844 1401444 4.12 857392 1689480 188036 23:18:13 23:13:01 28968892 31657132 3970328 12.05 98528 2863992 1581924 4.65 1000000 2603596 839656 23:18:13 23:14:01 25208284 31643476 7730936 23.47 142188 6406236 1588644 4.67 1049880 6142916 1554048 23:18:13 23:15:01 23077844 29631520 9861376 29.94 158120 6486928 9000532 26.48 3233736 6007984 1372 23:18:13 23:16:01 23029608 29584476 9909612 30.08 158300 6487480 8838720 26.01 3284564 6004508 272 23:18:13 23:17:01 23071980 29653684 9867240 29.96 158684 6515856 8096960 23.82 3231740 6019036 196 23:18:13 23:18:01 25278060 31662748 7661160 23.26 162404 6331184 1482948 4.36 1236320 5866040 31676 23:18:13 Average: 25533753 30792663 7405467 22.48 135458 5277931 4570167 13.45 1984805 4904794 373608 23:18:13 23:18:13 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 23:18:13 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:13 23:12:01 ens3 104.32 72.34 1072.47 15.28 0.00 0.00 0.00 0.00 23:18:13 23:12:01 lo 1.73 1.73 0.18 0.18 0.00 0.00 0.00 0.00 23:18:13 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:13 23:13:01 br-616c88bdd522 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:13 23:13:01 ens3 168.69 114.13 4266.81 12.85 0.00 0.00 0.00 0.00 23:18:13 23:13:01 lo 6.33 6.33 0.59 0.59 0.00 0.00 0.00 0.00 23:18:13 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:13 23:14:01 br-616c88bdd522 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:13 23:14:01 ens3 1173.42 627.46 31165.96 46.26 0.00 0.00 0.00 0.00 23:18:13 23:14:01 lo 7.47 7.47 0.74 0.74 0.00 0.00 0.00 0.00 23:18:13 23:15:01 veth451b918 27.00 25.15 11.56 16.27 0.00 0.00 0.00 
0.00 23:18:13 23:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:13 23:15:01 veth87408d9 0.15 0.50 0.01 0.03 0.00 0.00 0.00 0.00 23:18:13 23:15:01 br-616c88bdd522 0.90 0.80 0.07 0.32 0.00 0.00 0.00 0.00 23:18:13 23:16:01 veth451b918 26.51 22.30 8.44 24.17 0.00 0.00 0.00 0.00 23:18:13 23:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:13 23:16:01 veth87408d9 0.48 0.47 0.05 1.48 0.00 0.00 0.00 0.00 23:18:13 23:16:01 br-616c88bdd522 2.10 2.38 1.81 1.73 0.00 0.00 0.00 0.00 23:18:13 23:17:01 veth451b918 0.40 0.47 0.59 0.03 0.00 0.00 0.00 0.00 23:18:13 23:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:13 23:17:01 br-616c88bdd522 1.17 1.48 0.10 0.14 0.00 0.00 0.00 0.00 23:18:13 23:17:01 veth13c067f 0.00 0.38 0.00 0.02 0.00 0.00 0.00 0.00 23:18:13 23:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:13 23:18:01 ens3 1869.92 1086.12 37338.04 158.32 0.00 0.00 0.00 0.00 23:18:13 23:18:01 lo 35.49 35.49 6.25 6.25 0.00 0.00 0.00 0.00 23:18:13 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:13 Average: ens3 220.51 125.77 5233.66 15.87 0.00 0.00 0.00 0.00 23:18:13 Average: lo 4.52 4.52 0.85 0.85 0.00 0.00 0.00 0.00 23:18:13 23:18:13 23:18:13 ---> sar -P ALL: 23:18:13 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-2890) 02/05/24 _x86_64_ (8 CPU) 23:18:13 23:18:13 23:10:26 LINUX RESTART (8 CPU) 23:18:13 23:18:13 23:11:01 CPU %user %nice %system %iowait %steal %idle 23:18:13 23:12:01 all 10.53 0.00 0.87 1.83 0.03 86.74 23:18:13 23:12:01 0 1.08 0.00 0.62 6.16 0.02 92.12 23:18:13 23:12:01 1 9.78 0.00 0.82 0.70 0.08 88.62 23:18:13 23:12:01 2 33.78 0.00 2.26 6.62 0.07 57.28 23:18:13 23:12:01 3 26.00 0.00 1.39 0.37 0.05 72.20 23:18:13 23:12:01 4 4.99 0.00 0.58 0.05 0.02 94.36 23:18:13 23:12:01 5 4.36 0.00 0.50 0.30 0.02 94.83 23:18:13 23:12:01 6 1.18 0.00 0.38 0.07 0.02 98.35 23:18:13 23:12:01 7 3.17 0.00 0.40 0.35 0.00 96.08 23:18:13 23:13:01 all 11.10 0.00 1.72 2.30 0.04 84.84 23:18:13 23:13:01 0 4.75 0.00 1.34 0.75 
0.03 93.12 23:18:13 23:13:01 1 12.11 0.00 2.02 1.04 0.07 84.77 23:18:13 23:13:01 2 4.45 0.00 1.07 2.66 0.05 91.76 23:18:13 23:13:01 3 8.34 0.00 1.36 0.25 0.03 90.01 23:18:13 23:13:01 4 9.38 0.00 1.64 0.07 0.03 88.88 23:18:13 23:13:01 5 2.46 0.00 1.57 11.09 0.02 84.87 23:18:13 23:13:01 6 25.90 0.00 2.68 2.12 0.05 69.25 23:18:13 23:13:01 7 21.29 0.00 2.06 0.49 0.05 76.11 23:18:13 23:14:01 all 11.74 0.00 5.26 8.15 0.07 74.78 23:18:13 23:14:01 0 13.56 0.00 5.50 4.72 0.08 76.13 23:18:13 23:14:01 1 12.92 0.00 5.97 0.73 0.07 80.31 23:18:13 23:14:01 2 9.33 0.00 6.56 0.36 0.08 83.67 23:18:13 23:14:01 3 11.61 0.00 3.03 3.10 0.07 82.18 23:18:13 23:14:01 4 12.30 0.00 5.27 0.81 0.05 81.56 23:18:13 23:14:01 5 9.67 0.00 5.23 23.61 0.07 61.42 23:18:13 23:14:01 6 13.88 0.00 4.87 12.04 0.09 69.13 23:18:13 23:14:01 7 10.60 0.00 5.61 19.98 0.09 63.72 23:18:13 23:15:01 all 25.50 0.00 3.63 4.76 0.09 66.02 23:18:13 23:15:01 0 28.95 0.00 3.77 1.07 0.10 66.11 23:18:13 23:15:01 1 27.60 0.00 3.89 1.46 0.08 66.97 23:18:13 23:15:01 2 23.85 0.00 3.46 5.06 0.07 67.57 23:18:13 23:15:01 3 21.86 0.00 3.02 0.27 0.07 74.78 23:18:13 23:15:01 4 28.16 0.00 3.97 1.04 0.10 66.73 23:18:13 23:15:01 5 26.07 0.00 3.32 2.71 0.07 67.83 23:18:13 23:15:01 6 22.67 0.00 3.87 20.87 0.10 52.49 23:18:13 23:15:01 7 24.83 0.00 3.76 5.63 0.08 65.70 23:18:13 23:16:01 all 6.75 0.00 0.56 0.98 0.06 91.65 23:18:13 23:16:01 0 7.87 0.00 0.58 0.00 0.05 91.50 23:18:13 23:16:01 1 6.23 0.00 0.63 0.22 0.08 92.84 23:18:13 23:16:01 2 7.57 0.00 0.55 7.44 0.07 84.37 23:18:13 23:16:01 3 6.99 0.00 0.60 0.00 0.07 92.35 23:18:13 23:16:01 4 7.96 0.00 0.67 0.00 0.03 91.34 23:18:13 23:16:01 5 5.17 0.00 0.50 0.07 0.07 94.19 23:18:13 23:16:01 6 7.10 0.00 0.50 0.00 0.05 92.35 23:18:13 23:16:01 7 5.13 0.00 0.47 0.12 0.08 94.20 23:18:13 23:17:01 all 1.32 0.00 0.34 1.00 0.05 97.29 23:18:13 23:17:01 0 1.29 0.00 0.40 0.00 0.05 98.26 23:18:13 23:17:01 1 0.84 0.00 0.33 0.18 0.05 98.60 23:18:13 23:17:01 2 1.10 0.00 0.33 7.28 0.05 91.23 23:18:13 23:17:01 
3 2.32 0.00 0.42 0.00 0.05 97.21 23:18:13 23:17:01 4 1.10 0.00 0.27 0.00 0.05 98.58 23:18:13 23:17:01 5 0.94 0.00 0.35 0.39 0.07 98.26 23:18:13 23:17:01 6 1.85 0.00 0.30 0.12 0.03 97.70 23:18:13 23:17:01 7 1.09 0.00 0.30 0.02 0.05 98.55 23:18:13 23:18:01 all 6.33 0.00 0.66 1.39 0.04 91.58 23:18:13 23:18:01 0 1.02 0.00 0.50 0.28 0.03 98.16 23:18:13 23:18:01 1 2.78 0.00 0.70 1.33 0.02 95.16 23:18:13 23:18:01 2 1.04 0.00 0.48 8.08 0.02 90.39 23:18:13 23:18:01 3 0.97 0.00 0.43 0.50 0.03 98.06 23:18:13 23:18:01 4 18.01 0.00 1.00 0.33 0.05 80.61 23:18:13 23:18:01 5 10.04 0.00 0.55 0.32 0.03 89.06 23:18:13 23:18:01 6 5.45 0.00 0.58 0.05 0.02 93.90 23:18:13 23:18:01 7 11.40 0.00 0.94 0.25 0.07 87.35 23:18:13 Average: all 10.45 0.00 1.85 2.90 0.05 84.74 23:18:13 Average: 0 8.33 0.00 1.81 1.85 0.05 87.96 23:18:13 Average: 1 10.31 0.00 2.05 0.81 0.06 86.77 23:18:13 Average: 2 11.59 0.00 2.09 5.37 0.06 80.90 23:18:13 Average: 3 11.15 0.00 1.46 0.64 0.05 86.70 23:18:13 Average: 4 11.68 0.00 1.90 0.33 0.05 86.04 23:18:13 Average: 5 8.36 0.00 1.71 5.45 0.05 84.43 23:18:13 Average: 6 11.12 0.00 1.87 5.00 0.05 81.96 23:18:13 Average: 7 11.06 0.00 1.92 3.79 0.06 83.18 23:18:13 23:18:13 23:18:13