23:10:53 Started by timer
23:10:53 Running as SYSTEM
23:10:53 [EnvInject] - Loading node environment variables.
23:10:53 Building remotely on prd-ubuntu1804-docker-8c-8g-14039 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:10:53 [ssh-agent] Looking for ssh-agent implementation...
23:10:53 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:10:53 $ ssh-agent
23:10:53 SSH_AUTH_SOCK=/tmp/ssh-735nlZoSfOFa/agent.2081
23:10:53 SSH_AGENT_PID=2083
23:10:53 [ssh-agent] Started.
23:10:53 Running ssh-add (command line suppressed)
23:10:53 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_10773109421589806918.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_10773109421589806918.key)
23:10:53 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:10:53 The recommended git tool is: NONE
23:10:55 using credential onap-jenkins-ssh
23:10:55 Wiping out workspace first.
23:10:55 Cloning the remote Git repository
23:10:55 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:10:55 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:10:55 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:10:55 > git --version # timeout=10
23:10:55 > git --version # 'git version 2.17.1'
23:10:55 using GIT_SSH to set credentials Gerrit user
23:10:55 Verifying host key using manually-configured host key entries
23:10:55 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:10:55 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:10:55 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:10:56 Avoid second fetch
23:10:56 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:10:56 Checking out Revision caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 (refs/remotes/origin/master)
23:10:56 > git config core.sparsecheckout # timeout=10
23:10:56 > git checkout -f caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=30
23:10:56 Commit message: "Remove Dmaap configurations from CSITs"
23:10:56 > git rev-list --no-walk caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=10
23:10:56 provisioning config files...
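The checkout above can be replayed outside Jenkins with a short sketch along these lines (repository URL, refspec, and revision are taken verbatim from the log; the local directory name is illustrative):

    # Sketch: reproduce the job's pinned checkout locally.
    git init policy-docker && cd policy-docker
    git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785   # "Remove Dmaap configurations from CSITs"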
23:10:56 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:10:56 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:10:56 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10217993218259060493.sh
23:10:56 ---> python-tools-install.sh
23:10:56 Setup pyenv:
23:10:56 * system (set by /opt/pyenv/version)
23:10:56 * 3.8.13 (set by /opt/pyenv/version)
23:10:56 * 3.9.13 (set by /opt/pyenv/version)
23:10:56 * 3.10.6 (set by /opt/pyenv/version)
23:11:00 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-Zh8L
23:11:00 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:04 lf-activate-venv(): INFO: Installing: lftools
23:11:35 lf-activate-venv(): INFO: Adding /tmp/venv-Zh8L/bin to PATH
23:11:35 Generating Requirements File
23:12:06 Python 3.10.6
23:12:06 pip 23.3.2 from /tmp/venv-Zh8L/lib/python3.10/site-packages/pip (python 3.10)
23:12:06 appdirs==1.4.4
23:12:06 argcomplete==3.2.1
23:12:06 aspy.yaml==1.3.0
23:12:06 attrs==23.2.0
23:12:06 autopage==0.5.2
23:12:06 beautifulsoup4==4.12.3
23:12:06 boto3==1.34.23
23:12:06 botocore==1.34.23
23:12:06 bs4==0.0.2
23:12:06 cachetools==5.3.2
23:12:06 certifi==2023.11.17
23:12:06 cffi==1.16.0
23:12:06 cfgv==3.4.0
23:12:06 chardet==5.2.0
23:12:06 charset-normalizer==3.3.2
23:12:06 click==8.1.7
23:12:06 cliff==4.5.0
23:12:06 cmd2==2.4.3
23:12:06 cryptography==3.3.2
23:12:06 debtcollector==2.5.0
23:12:06 decorator==5.1.1
23:12:06 defusedxml==0.7.1
23:12:06 Deprecated==1.2.14
23:12:06 distlib==0.3.8
23:12:06 dnspython==2.5.0
23:12:06 docker==4.2.2
23:12:06 dogpile.cache==1.3.0
23:12:06 email-validator==2.1.0.post1
23:12:06 filelock==3.13.1
23:12:06 future==0.18.3
23:12:06 gitdb==4.0.11
23:12:06 GitPython==3.1.41
23:12:06 google-auth==2.26.2
23:12:06 httplib2==0.22.0
23:12:06 identify==2.5.33
23:12:06 idna==3.6
23:12:06 importlib-resources==1.5.0
23:12:06 iso8601==2.1.0
23:12:06 Jinja2==3.1.3
23:12:06 jmespath==1.0.1
23:12:06 jsonpatch==1.33
23:12:06 jsonpointer==2.4
23:12:06 jsonschema==4.21.1
23:12:06 jsonschema-specifications==2023.12.1
23:12:06 keystoneauth1==5.5.0
23:12:06 kubernetes==29.0.0
23:12:06 lftools==0.37.8
23:12:06 lxml==5.1.0
23:12:06 MarkupSafe==2.1.4
23:12:06 msgpack==1.0.7
23:12:06 multi_key_dict==2.0.3
23:12:06 munch==4.0.0
23:12:06 netaddr==0.10.1
23:12:06 netifaces==0.11.0
23:12:06 niet==1.4.2
23:12:06 nodeenv==1.8.0
23:12:06 oauth2client==4.1.3
23:12:06 oauthlib==3.2.2
23:12:06 openstacksdk==0.62.0
23:12:06 os-client-config==2.1.0
23:12:06 os-service-types==1.7.0
23:12:06 osc-lib==3.0.0
23:12:06 oslo.config==9.3.0
23:12:06 oslo.context==5.3.0
23:12:06 oslo.i18n==6.2.0
23:12:06 oslo.log==5.4.0
23:12:06 oslo.serialization==5.3.0
23:12:06 oslo.utils==7.0.0
23:12:06 packaging==23.2
23:12:06 pbr==6.0.0
23:12:06 platformdirs==4.1.0
23:12:06 prettytable==3.9.0
23:12:06 pyasn1==0.5.1
23:12:06 pyasn1-modules==0.3.0
23:12:06 pycparser==2.21
23:12:06 pygerrit2==2.0.15
23:12:06 PyGithub==2.1.1
23:12:06 pyinotify==0.9.6
23:12:06 PyJWT==2.8.0
23:12:06 PyNaCl==1.5.0
23:12:06 pyparsing==2.4.7
23:12:06 pyperclip==1.8.2
23:12:06 pyrsistent==0.20.0
23:12:06 python-cinderclient==9.4.0
23:12:06 python-dateutil==2.8.2
23:12:06 python-heatclient==3.4.0
23:12:06 python-jenkins==1.8.2
23:12:06 python-keystoneclient==5.3.0
23:12:06 python-magnumclient==4.3.0
23:12:06 python-novaclient==18.4.0
23:12:06 python-openstackclient==6.0.0
23:12:06 python-swiftclient==4.4.0
23:12:06 pytz==2023.3.post1
23:12:06 PyYAML==6.0.1
23:12:06 referencing==0.32.1
23:12:06 requests==2.31.0
23:12:06 requests-oauthlib==1.3.1
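Condensed, the python-tools-install.sh step amounts to roughly the following (a sketch inferred from the lf-activate-venv() messages above, not the actual global-jjb helper; the venv path varies per run):

    # Sketch of the venv bootstrap reported by lf-activate-venv() above.
    venv=/tmp/venv-Zh8L                # created by the tooling; name is per-run
    python3 -m venv "$venv"            # "Creating python3 venv"
    "$venv/bin/pip" install lftools    # "Installing: lftools"
    export PATH="$venv/bin:$PATH"      # "Adding /tmp/venv-Zh8L/bin to PATH"
    "$venv/bin/pip" freeze             # "Generating Requirements File" output above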
23:12:06 requestsexceptions==1.4.0
23:12:06 rfc3986==2.0.0
23:12:06 rpds-py==0.17.1
23:12:06 rsa==4.9
23:12:06 ruamel.yaml==0.18.5
23:12:06 ruamel.yaml.clib==0.2.8
23:12:06 s3transfer==0.10.0
23:12:06 simplejson==3.19.2
23:12:06 six==1.16.0
23:12:06 smmap==5.0.1
23:12:06 soupsieve==2.5
23:12:06 stevedore==5.1.0
23:12:06 tabulate==0.9.0
23:12:06 toml==0.10.2
23:12:06 tomlkit==0.12.3
23:12:06 tqdm==4.66.1
23:12:06 typing_extensions==4.9.0
23:12:06 tzdata==2023.4
23:12:06 urllib3==1.26.18
23:12:06 virtualenv==20.25.0
23:12:06 wcwidth==0.2.13
23:12:06 websocket-client==1.7.0
23:12:06 wrapt==1.16.0
23:12:06 xdg==6.0.0
23:12:06 xmltodict==0.13.0
23:12:06 yq==3.2.3
23:12:06 [EnvInject] - Injecting environment variables from a build step.
23:12:06 [EnvInject] - Injecting as environment variables the properties content
23:12:06 SET_JDK_VERSION=openjdk17
23:12:06 GIT_URL="git://cloud.onap.org/mirror"
23:12:06
23:12:06 [EnvInject] - Variables injected successfully.
23:12:06 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins12657488408940214260.sh
23:12:06 ---> update-java-alternatives.sh
23:12:06 ---> Updating Java version
23:12:07 ---> Ubuntu/Debian system detected
23:12:07 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:12:07 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:12:07 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:12:07 openjdk version "17.0.4" 2022-07-19
23:12:07 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:12:07 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:12:07 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:12:07 [EnvInject] - Injecting environment variables from a build step.
23:12:07 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:12:07 [EnvInject] - Variables injected successfully.
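The Java switch is plain update-alternatives plus a properties file that EnvInject reads back; a minimal equivalent of what update-java-alternatives.sh appears to do (the --set form is an assumption; the script may drive update-alternatives differently):

    # Sketch: point java/javac at OpenJDK 17 and publish JAVA_HOME for EnvInject.
    sudo update-alternatives --set java  /usr/lib/jvm/java-17-openjdk-amd64/bin/java
    sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
    echo "JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64" > /tmp/java.env   # read by the EnvInject step above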
23:12:07 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins18200513181673086773.sh
23:12:07 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
23:12:07 + set +u
23:12:07 + save_set
23:12:07 + RUN_CSIT_SAVE_SET=ehxB
23:12:07 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
23:12:07 + '[' 1 -eq 0 ']'
23:12:07 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:07 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:07 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:07 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:07 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:07 + export ROBOT_VARIABLES=
23:12:07 + ROBOT_VARIABLES=
23:12:07 + export PROJECT=pap
23:12:07 + PROJECT=pap
23:12:07 + cd /w/workspace/policy-pap-master-project-csit-pap
23:12:07 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:07 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:07 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:07 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
23:12:07 + relax_set
23:12:07 + set +e
23:12:07 + set +o pipefail
23:12:07 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:07 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:07 +++ mktemp -d
23:12:07 ++ ROBOT_VENV=/tmp/tmp.6gxXuOa3EW
23:12:07 ++ echo ROBOT_VENV=/tmp/tmp.6gxXuOa3EW
23:12:07 +++ python3 --version
23:12:07 ++ echo 'Python version is: Python 3.6.9'
23:12:07 Python version is: Python 3.6.9
23:12:07 ++ python3 -m venv --clear /tmp/tmp.6gxXuOa3EW
23:12:09 ++ source /tmp/tmp.6gxXuOa3EW/bin/activate
23:12:09 +++ deactivate nondestructive
23:12:09 +++ '[' -n '' ']'
23:12:09 +++ '[' -n '' ']'
23:12:09 +++ '[' -n /bin/bash -o -n '' ']'
23:12:09 +++ hash -r
23:12:09 +++ '[' -n '' ']'
23:12:09 +++ unset VIRTUAL_ENV
23:12:09 +++ '[' '!' nondestructive = nondestructive ']'
23:12:09 +++ VIRTUAL_ENV=/tmp/tmp.6gxXuOa3EW
23:12:09 +++ export VIRTUAL_ENV
23:12:09 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:09 +++ PATH=/tmp/tmp.6gxXuOa3EW/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:09 +++ export PATH
23:12:09 +++ '[' -n '' ']'
23:12:09 +++ '[' -z '' ']'
23:12:09 +++ _OLD_VIRTUAL_PS1=
23:12:09 +++ '[' 'x(tmp.6gxXuOa3EW) ' '!=' x ']'
23:12:09 +++ PS1='(tmp.6gxXuOa3EW) '
23:12:09 +++ export PS1
23:12:09 +++ '[' -n /bin/bash -o -n '' ']'
23:12:09 +++ hash -r
23:12:09 ++ set -exu
23:12:09 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
23:12:12 ++ echo 'Installing Python Requirements'
23:12:12 Installing Python Requirements
23:12:12 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
23:12:31 ++ python3 -m pip -qq freeze
23:12:32 bcrypt==4.0.1
23:12:32 beautifulsoup4==4.12.3
23:12:32 bitarray==2.9.2
23:12:32 certifi==2023.11.17
23:12:32 cffi==1.15.1
23:12:32 charset-normalizer==2.0.12
23:12:32 cryptography==40.0.2
23:12:32 decorator==5.1.1
23:12:32 elasticsearch==7.17.9
23:12:32 elasticsearch-dsl==7.4.1
23:12:32 enum34==1.1.10
23:12:32 idna==3.6
23:12:32 importlib-resources==5.4.0
23:12:32 ipaddr==2.2.0
23:12:32 isodate==0.6.1
23:12:32 jmespath==0.10.0
23:12:32 jsonpatch==1.32
23:12:32 jsonpath-rw==1.4.0
23:12:32 jsonpointer==2.3
23:12:32 lxml==5.1.0
23:12:32 netaddr==0.8.0
23:12:32 netifaces==0.11.0
23:12:32 odltools==0.1.28
23:12:32 paramiko==3.4.0
23:12:32 pkg_resources==0.0.0
23:12:32 ply==3.11
23:12:32 pyang==2.6.0
23:12:32 pyangbind==0.8.1
23:12:32 pycparser==2.21
23:12:32 pyhocon==0.3.60
23:12:32 PyNaCl==1.5.0
23:12:32 pyparsing==3.1.1
23:12:32 python-dateutil==2.8.2
23:12:32 regex==2023.8.8
23:12:32 requests==2.27.1
23:12:32 robotframework==6.1.1
23:12:32 robotframework-httplibrary==0.4.2
23:12:32 robotframework-pythonlibcore==3.0.0
23:12:32 robotframework-requests==0.9.4
23:12:32 robotframework-selenium2library==3.0.0
23:12:32 robotframework-seleniumlibrary==5.1.3
23:12:32 robotframework-sshlibrary==3.8.0
23:12:32 scapy==2.5.0
23:12:32 scp==0.14.5
23:12:32 selenium==3.141.0
23:12:32 six==1.16.0
23:12:32 soupsieve==2.3.2.post1
23:12:32 urllib3==1.26.18
23:12:32 waitress==2.0.0
23:12:32 WebOb==1.8.7
23:12:32 WebTest==3.0.0
23:12:32 zipp==3.6.0
23:12:32 ++ mkdir -p /tmp/tmp.6gxXuOa3EW/src/onap
23:12:32 ++ rm -rf /tmp/tmp.6gxXuOa3EW/src/onap/testsuite
23:12:32 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
23:12:38 ++ echo 'Installing python confluent-kafka library'
23:12:38 Installing python confluent-kafka library
23:12:38 ++ python3 -m pip install -qq confluent-kafka
23:12:39 ++ echo 'Uninstall docker-py and reinstall docker.'
23:12:39 Uninstall docker-py and reinstall docker.
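Stripped of the xtrace noise, prepare-robot-env.sh boils down to the following sequence (commands taken from the trace above; paths shortened to be relative to the workspace):

    # Sketch: the net effect of the prepare-robot-env.sh trace above.
    ROBOT_VENV=$(mktemp -d)
    python3 -m venv --clear "$ROBOT_VENV"
    source "$ROBOT_VENV/bin/activate"
    python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
    python3 -m pip install -qq -r csit/resources/scripts/pylibs.txt
    python3 -m pip install -qq --upgrade \
        --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple \
        'robotframework-onap==0.6.0.*' --pre
    python3 -m pip install -qq confluent-kafka
    python3 -m pip uninstall -y -qq docker   # drop the legacy docker-py package...
    python3 -m pip install -U -qq docker     # ...and reinstall the maintained docker client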
23:12:39 ++ python3 -m pip uninstall -y -qq docker
23:12:40 ++ python3 -m pip install -U -qq docker
23:12:41 ++ python3 -m pip -qq freeze
23:12:42 bcrypt==4.0.1
23:12:42 beautifulsoup4==4.12.3
23:12:42 bitarray==2.9.2
23:12:42 certifi==2023.11.17
23:12:42 cffi==1.15.1
23:12:42 charset-normalizer==2.0.12
23:12:42 confluent-kafka==2.3.0
23:12:42 cryptography==40.0.2
23:12:42 decorator==5.1.1
23:12:42 deepdiff==5.7.0
23:12:42 dnspython==2.2.1
23:12:42 docker==5.0.3
23:12:42 elasticsearch==7.17.9
23:12:42 elasticsearch-dsl==7.4.1
23:12:42 enum34==1.1.10
23:12:42 future==0.18.3
23:12:42 idna==3.6
23:12:42 importlib-resources==5.4.0
23:12:42 ipaddr==2.2.0
23:12:42 isodate==0.6.1
23:12:42 Jinja2==3.0.3
23:12:42 jmespath==0.10.0
23:12:42 jsonpatch==1.32
23:12:42 jsonpath-rw==1.4.0
23:12:42 jsonpointer==2.3
23:12:42 kafka-python==2.0.2
23:12:42 lxml==5.1.0
23:12:42 MarkupSafe==2.0.1
23:12:42 more-itertools==5.0.0
23:12:42 netaddr==0.8.0
23:12:42 netifaces==0.11.0
23:12:42 odltools==0.1.28
23:12:42 ordered-set==4.0.2
23:12:42 paramiko==3.4.0
23:12:42 pbr==6.0.0
23:12:42 pkg_resources==0.0.0
23:12:42 ply==3.11
23:12:42 protobuf==3.19.6
23:12:42 pyang==2.6.0
23:12:42 pyangbind==0.8.1
23:12:42 pycparser==2.21
23:12:42 pyhocon==0.3.60
23:12:42 PyNaCl==1.5.0
23:12:42 pyparsing==3.1.1
23:12:42 python-dateutil==2.8.2
23:12:42 PyYAML==6.0.1
23:12:42 regex==2023.8.8
23:12:42 requests==2.27.1
23:12:42 robotframework==6.1.1
23:12:42 robotframework-httplibrary==0.4.2
23:12:42 robotframework-onap==0.6.0.dev105
23:12:42 robotframework-pythonlibcore==3.0.0
23:12:42 robotframework-requests==0.9.4
23:12:42 robotframework-selenium2library==3.0.0
23:12:42 robotframework-seleniumlibrary==5.1.3
23:12:42 robotframework-sshlibrary==3.8.0
23:12:42 robotlibcore-temp==1.0.2
23:12:42 scapy==2.5.0
23:12:42 scp==0.14.5
23:12:42 selenium==3.141.0
23:12:42 six==1.16.0
23:12:42 soupsieve==2.3.2.post1
23:12:42 urllib3==1.26.18
23:12:42 waitress==2.0.0
23:12:42 WebOb==1.8.7
23:12:42 websocket-client==1.3.1
23:12:42 WebTest==3.0.0
23:12:42 zipp==3.6.0
23:12:42 ++ uname
23:12:42 ++ grep -q Linux
23:12:42 ++ sudo apt-get -y -qq install libxml2-utils
23:12:42 + load_set
23:12:42 + _setopts=ehuxB
23:12:42 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
23:12:42 ++ tr : ' '
23:12:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:42 + set +o braceexpand
23:12:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:42 + set +o hashall
23:12:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:42 + set +o interactive-comments
23:12:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:42 + set +o nounset
23:12:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:42 + set +o xtrace
23:12:42 ++ echo ehuxB
23:12:42 ++ sed 's/./& /g'
23:12:42 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:42 + set +e
23:12:42 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:42 + set +h
23:12:42 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:42 + set +u
23:12:42 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:42 + set +x
23:12:42 + source_safely /tmp/tmp.6gxXuOa3EW/bin/activate
23:12:42 + '[' -z /tmp/tmp.6gxXuOa3EW/bin/activate ']'
23:12:42 + relax_set
23:12:42 + set +e
23:12:42 + set +o pipefail
23:12:42 + . /tmp/tmp.6gxXuOa3EW/bin/activate
23:12:42 ++ deactivate nondestructive
23:12:42 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
23:12:42 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:42 ++ export PATH
23:12:42 ++ unset _OLD_VIRTUAL_PATH
23:12:42 ++ '[' -n '' ']'
23:12:42 ++ '[' -n /bin/bash -o -n '' ']'
23:12:42 ++ hash -r
23:12:42 ++ '[' -n '' ']'
23:12:42 ++ unset VIRTUAL_ENV
23:12:42 ++ '[' '!' nondestructive = nondestructive ']'
23:12:42 ++ VIRTUAL_ENV=/tmp/tmp.6gxXuOa3EW
23:12:42 ++ export VIRTUAL_ENV
23:12:42 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:42 ++ PATH=/tmp/tmp.6gxXuOa3EW/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:42 ++ export PATH
23:12:42 ++ '[' -n '' ']'
23:12:42 ++ '[' -z '' ']'
23:12:42 ++ _OLD_VIRTUAL_PS1='(tmp.6gxXuOa3EW) '
23:12:42 ++ '[' 'x(tmp.6gxXuOa3EW) ' '!=' x ']'
23:12:42 ++ PS1='(tmp.6gxXuOa3EW) (tmp.6gxXuOa3EW) '
23:12:42 ++ export PS1
23:12:42 ++ '[' -n /bin/bash -o -n '' ']'
23:12:42 ++ hash -r
23:12:42 + load_set
23:12:42 + _setopts=hxB
23:12:42 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:12:42 ++ tr : ' '
23:12:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:42 + set +o braceexpand
23:12:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:42 + set +o hashall
23:12:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:42 + set +o interactive-comments
23:12:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:42 + set +o xtrace
23:12:42 ++ echo hxB
23:12:42 ++ sed 's/./& /g'
23:12:42 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:42 + set +h
23:12:42 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:42 + set +x
23:12:42 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:42 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:42 + export TEST_OPTIONS=
23:12:42 + TEST_OPTIONS=
23:12:42 ++ mktemp -d
23:12:42 + WORKDIR=/tmp/tmp.C9xkkUvsOC
23:12:42 + cd /tmp/tmp.C9xkkUvsOC
23:12:42 + docker login -u docker -p docker nexus3.onap.org:10001
23:12:43 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
23:12:43 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
23:12:43 Configure a credential helper to remove this warning. See
23:12:43 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
23:12:43
23:12:43 Login Succeeded
23:12:43 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:43 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:43 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
23:12:43 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:43 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:43 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:43 + relax_set
23:12:43 + set +e
23:12:43 + set +o pipefail
23:12:43 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:43 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
23:12:43 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:43 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
23:12:43 +++ GERRIT_BRANCH=master
23:12:43 +++ echo GERRIT_BRANCH=master
23:12:43 GERRIT_BRANCH=master
23:12:43 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:12:43 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
23:12:43 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
23:12:43 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
23:12:45 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:45 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:45 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:45 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:45 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:45 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:45 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
23:12:45 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:45 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:12:45 +++ grafana=false
23:12:45 +++ gui=false
23:12:45 +++ [[ 2 -gt 0 ]]
23:12:45 +++ key=apex-pdp
23:12:45 +++ case $key in
23:12:45 +++ echo apex-pdp
23:12:45 apex-pdp
23:12:45 +++ component=apex-pdp
23:12:45 +++ shift
23:12:45 +++ [[ 1 -gt 0 ]]
23:12:45 +++ key=--grafana
23:12:45 +++ case $key in
23:12:45 +++ grafana=true
23:12:45 +++ shift
23:12:45 +++ [[ 0 -gt 0 ]]
23:12:45 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:12:45 +++ echo 'Configuring docker compose...'
23:12:45 Configuring docker compose...
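The insecure-password warning printed during the docker login above can be avoided by feeding the password on stdin, e.g. (registry and throwaway credentials exactly as in the log):

    # Sketch: same login without exposing the password in argv or shell history.
    echo "docker" | docker login -u docker --password-stdin nexus3.onap.org:10001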
23:12:45 +++ source export-ports.sh
23:12:45 +++ source get-versions.sh
23:12:47 +++ '[' -z pap ']'
23:12:47 +++ '[' -n apex-pdp ']'
23:12:47 +++ '[' apex-pdp == logs ']'
23:12:47 +++ '[' true = true ']'
23:12:47 +++ echo 'Starting apex-pdp application with Grafana'
23:12:47 Starting apex-pdp application with Grafana
23:12:47 +++ docker-compose up -d apex-pdp grafana
23:12:49 Creating network "compose_default" with the default driver
23:12:50 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
23:12:51 latest: Pulling from prom/prometheus
23:13:01 Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
23:13:01 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
23:13:01 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
23:13:01 latest: Pulling from grafana/grafana
23:13:06 Digest: sha256:6b5b37eb35bbf30e7f64bd7f0fd41c0a5b7637f65d3bf93223b04a192b8bf3e2
23:13:06 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
23:13:06 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
23:13:06 10.10.2: Pulling from mariadb
23:13:11 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
23:13:11 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
23:13:11 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
23:13:11 3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
23:13:16 Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05
23:13:16 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
23:13:16 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
23:13:16 latest: Pulling from confluentinc/cp-zookeeper
23:13:27 Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593
23:13:27 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
23:13:27 Pulling kafka (confluentinc/cp-kafka:latest)...
23:13:29 latest: Pulling from confluentinc/cp-kafka
23:13:33 Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9
23:13:33 Status: Downloaded newer image for confluentinc/cp-kafka:latest
23:13:33 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
23:13:33 3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
23:13:47 Digest: sha256:eb47623eeab9aad8524ecc877b6708ae74b57f9f3cfe77554ad0d1521491cb5d
23:13:47 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
23:13:47 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)...
23:13:47 3.1.1-SNAPSHOT: Pulling from onap/policy-api
23:13:50 Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e
23:13:50 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT
23:13:50 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.0)...
23:13:50 3.1.0: Pulling from onap/policy-pap
23:14:00 Digest: sha256:ff420a18fdd0393b657dcd1ae9e545437067fe5610606e3999888c21302a6231
23:14:00 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.0
23:14:00 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)...
23:14:00 3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp
23:14:09 Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b
23:14:09 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT
23:14:09 Creating mariadb ...
23:14:09 Creating compose_zookeeper_1 ...
23:14:09 Creating simulator ...
23:14:09 Creating prometheus ...
23:14:24 Creating mariadb ... done
23:14:24 Creating policy-db-migrator ...
23:14:25 Creating compose_zookeeper_1 ... done
23:14:25 Creating kafka ...
23:14:26 Creating kafka ... done
23:14:27 Creating simulator ... done
23:14:28 Creating policy-db-migrator ... done
23:14:28 Creating policy-api ...
23:14:29 Creating prometheus ... done
23:14:29 Creating grafana ...
23:14:30 Creating grafana ... done
23:14:31 Creating policy-api ... done
23:14:31 Creating policy-pap ...
23:14:32 Creating policy-pap ... done
23:14:32 Creating policy-apex-pdp ...
23:14:33 Creating policy-apex-pdp ... done
23:14:33 +++ echo 'Prometheus server: http://localhost:30259'
23:14:33 Prometheus server: http://localhost:30259
23:14:33 +++ echo 'Grafana server: http://localhost:30269'
23:14:33 Grafana server: http://localhost:30269
23:14:33 +++ cd /w/workspace/policy-pap-master-project-csit-pap
23:14:33 ++ sleep 10
23:14:43 ++ unset http_proxy https_proxy
23:14:43 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
23:14:43 Waiting for REST to come up on localhost port 30003...
23:14:43 NAMES                 STATUS
23:14:43 policy-apex-pdp       Up 10 seconds
23:14:43 policy-pap            Up 11 seconds
23:14:43 grafana               Up 13 seconds
23:14:43 policy-api            Up 11 seconds
23:14:43 kafka                 Up 17 seconds
23:14:43 prometheus            Up 14 seconds
23:14:43 compose_zookeeper_1   Up 18 seconds
23:14:43 simulator             Up 16 seconds
23:14:43 mariadb               Up 19 seconds
23:14:48 NAMES                 STATUS
23:14:48 policy-apex-pdp       Up 15 seconds
23:14:48 policy-pap            Up 16 seconds
23:14:48 grafana               Up 18 seconds
23:14:48 policy-api            Up 17 seconds
23:14:48 kafka                 Up 22 seconds
23:14:48 prometheus            Up 19 seconds
23:14:48 compose_zookeeper_1   Up 23 seconds
23:14:48 simulator             Up 21 seconds
23:14:48 mariadb               Up 24 seconds
23:14:53 NAMES                 STATUS
23:14:53 policy-apex-pdp       Up 20 seconds
23:14:53 policy-pap            Up 21 seconds
23:14:53 grafana               Up 23 seconds
23:14:53 policy-api            Up 22 seconds
23:14:53 kafka                 Up 27 seconds
23:14:53 prometheus            Up 24 seconds
23:14:53 compose_zookeeper_1   Up 28 seconds
23:14:53 simulator             Up 26 seconds
23:14:53 mariadb               Up 29 seconds
23:14:59 NAMES                 STATUS
23:14:59 policy-apex-pdp       Up 25 seconds
23:14:59 policy-pap            Up 26 seconds
23:14:59 grafana               Up 28 seconds
23:14:59 policy-api            Up 27 seconds
23:14:59 kafka                 Up 32 seconds
23:14:59 prometheus            Up 29 seconds
23:14:59 compose_zookeeper_1   Up 33 seconds
23:14:59 simulator             Up 31 seconds
23:14:59 mariadb               Up 34 seconds
23:15:04 NAMES                 STATUS
23:15:04 policy-apex-pdp       Up 30 seconds
23:15:04 policy-pap            Up 31 seconds
23:15:04 grafana               Up 33 seconds
23:15:04 policy-api            Up 32 seconds
23:15:04 kafka                 Up 37 seconds
23:15:04 prometheus            Up 34 seconds
23:15:04 compose_zookeeper_1   Up 38 seconds
23:15:04 simulator             Up 36 seconds
23:15:04 mariadb               Up 39 seconds
23:15:04 ++ export 'SUITES=pap-test.robot
23:15:04 pap-slas.robot'
23:15:04 ++ SUITES='pap-test.robot
23:15:04 pap-slas.robot'
23:15:04 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:15:04 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v
NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:15:04 + load_set
23:15:04 + _setopts=hxB
23:15:04 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:15:04 ++ tr : ' '
23:15:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:15:04 + set +o braceexpand
23:15:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:15:04 + set +o hashall
23:15:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:15:04 + set +o interactive-comments
23:15:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:15:04 + set +o xtrace
23:15:04 ++ echo hxB
23:15:04 ++ sed 's/./& /g'
23:15:04 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:15:04 + set +h
23:15:04 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:15:04 + set +x
23:15:04 + docker_stats
23:15:04 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
23:15:04 ++ uname -s
23:15:04 + '[' Linux == Darwin ']'
23:15:04 + sh -c 'top -bn1 | head -3'
23:15:04 top - 23:15:04 up 4 min,  0 users,  load average: 2.98, 1.39, 0.56
23:15:04 Tasks: 209 total,   1 running, 130 sleeping,   0 stopped,   0 zombie
23:15:04 %Cpu(s): 12.7 us,  2.8 sy,  0.0 ni, 78.7 id,  5.7 wa,  0.0 hi,  0.1 si,  0.1 st
23:15:04 + echo
23:15:04
23:15:04 + sh -c 'free -h'
23:15:04 + echo
23:15:04 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:15:04               total        used        free      shared  buff/cache   available
23:15:04 Mem:            31G        2.8G         21G        1.3M        6.7G         28G
23:15:04 Swap:          1.0G          0B        1.0G
23:15:04
23:15:04 NAMES                 STATUS
23:15:04 policy-apex-pdp       Up 30 seconds
23:15:04 policy-pap            Up 31 seconds
23:15:04 grafana               Up 33 seconds
23:15:04 policy-api            Up 32 seconds
23:15:04 kafka                 Up 37 seconds
23:15:04 prometheus            Up 34 seconds
23:15:04 compose_zookeeper_1   Up 38 seconds
23:15:04 simulator             Up 36 seconds
23:15:04 mariadb               Up 39 seconds
23:15:04 + echo
23:15:04
23:15:04 + docker stats --no-stream
23:15:07 CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
23:15:07 50b647c65335   policy-apex-pdp       277.07%   190.2MiB / 31.41GiB   0.59%   7.16kB / 6.89kB   0B / 0B         49
23:15:07 1db58b627eec   policy-pap            1.78%     520.4MiB / 31.41GiB   1.62%   26.8kB / 28.9kB   0B / 181MB      62
23:15:07 2c9ab507ee0b   grafana               0.02%     51.51MiB / 31.41GiB   0.16%   18.2kB / 3.66kB   0B / 23.9MB     14
23:15:07 f9d8745ecf88   policy-api            0.29%     755.5MiB / 31.41GiB   2.35%   999kB / 710kB     0B / 0B         54
23:15:07 48182883a08d   kafka                 18.79%    361.8MiB / 31.41GiB   1.12%   64.2kB / 67.5kB   0B / 508kB      82
23:15:07 2fe872a509fe   prometheus            0.26%     18.66MiB / 31.41GiB   0.06%   1.6kB / 474B      205kB / 0B      11
23:15:07 8dc9896bd9c7   compose_zookeeper_1   0.11%     98.25MiB / 31.41GiB   0.31%   52.9kB / 45.8kB   0B / 377kB      60
23:15:07 3707560567d3   simulator             0.09%     124.2MiB / 31.41GiB   0.39%   1.36kB / 0B       0B / 0B         76
23:15:07 a8df4284cc57   mariadb               0.01%     101.9MiB / 31.41GiB   0.32%   996kB / 1.18MB    11MB / 67.9MB   40
23:15:07 + echo
23:15:07
23:15:07 + cd /tmp/tmp.C9xkkUvsOC
23:15:07 + echo 'Reading the testplan:'
23:15:07 Reading the testplan:
23:15:07 + echo 'pap-test.robot
23:15:07 pap-slas.robot'
23:15:07 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
23:15:07 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
23:15:07 + cat testplan.txt
23:15:07 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
23:15:07 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:15:07 ++ xargs
23:15:07 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
23:15:07 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:15:07 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:15:07 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:15:07 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:15:07 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
23:15:07 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
23:15:07 + relax_set
23:15:07 + set +e
23:15:07 + set +o pipefail
23:15:07 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:15:07 ==============================================================================
23:15:07 pap
23:15:07 ==============================================================================
23:15:07 pap.Pap-Test
23:15:07 ==============================================================================
23:15:08 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:15:08 ------------------------------------------------------------------------------
23:15:08 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:15:08 ------------------------------------------------------------------------------
23:15:09 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:15:09 ------------------------------------------------------------------------------
23:15:09 Healthcheck :: Verify policy pap health check | PASS |
23:15:09 ------------------------------------------------------------------------------
23:15:30 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:15:30 ------------------------------------------------------------------------------
23:15:30 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:15:30 ------------------------------------------------------------------------------
23:15:31 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:15:31 ------------------------------------------------------------------------------
23:15:31 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:15:31 ------------------------------------------------------------------------------
23:15:31 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:15:31 ------------------------------------------------------------------------------
23:15:31 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:15:31 ------------------------------------------------------------------------------
23:15:31 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:15:31 ------------------------------------------------------------------------------
23:15:32 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:15:32 ------------------------------------------------------------------------------
23:15:32 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:15:32 ------------------------------------------------------------------------------
23:15:32 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:15:32 ------------------------------------------------------------------------------
23:15:32 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:15:32 ------------------------------------------------------------------------------
23:15:33 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:15:33 ------------------------------------------------------------------------------
23:15:33 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:15:33 ------------------------------------------------------------------------------
23:15:53 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:53 ------------------------------------------------------------------------------
23:15:53 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:53 ------------------------------------------------------------------------------
23:15:53 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:53 ------------------------------------------------------------------------------
23:15:53 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:53 ------------------------------------------------------------------------------
23:15:54 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:54 ------------------------------------------------------------------------------
23:15:54 pap.Pap-Test | PASS |
23:15:54 22 tests, 22 passed, 0 failed
23:15:54 ==============================================================================
23:15:54 pap.Pap-Slas
23:15:54 ==============================================================================
23:16:54 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:54 ------------------------------------------------------------------------------
23:16:54 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:54 ------------------------------------------------------------------------------
23:16:54 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:54 ------------------------------------------------------------------------------
23:16:54 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:54 ------------------------------------------------------------------------------
23:16:54 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:54 ------------------------------------------------------------------------------
23:16:54 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:54 ------------------------------------------------------------------------------
23:16:54 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:54 ------------------------------------------------------------------------------
23:16:54 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:54 ------------------------------------------------------------------------------
23:16:54 pap.Pap-Slas | PASS |
23:16:54 8 tests, 8 passed, 0 failed
23:16:54 ==============================================================================
23:16:54 pap | PASS |
23:16:54 30 tests, 30 passed, 0 failed
23:16:54 ==============================================================================
23:16:54 Output: /tmp/tmp.C9xkkUvsOC/output.xml
23:16:54 Log: /tmp/tmp.C9xkkUvsOC/log.html
23:16:54 Report: /tmp/tmp.C9xkkUvsOC/report.html
23:16:54 + RESULT=0
23:16:54 + load_set
23:16:54 + _setopts=hxB
23:16:54 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:54 ++ tr : ' '
23:16:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:54 + set +o braceexpand
23:16:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:54 + set +o hashall
23:16:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:54 + set +o interactive-comments
23:16:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:54 + set +o xtrace
23:16:54 ++ echo hxB
23:16:54 ++ sed 's/./& /g'
23:16:54 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:54 + set +h
23:16:54 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:54 + set +x
23:16:54 + echo 'RESULT: 0'
23:16:54 RESULT: 0
23:16:54 + exit 0
23:16:54 + on_exit
23:16:54 + rc=0
23:16:54 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:16:54 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:54 NAMES                 STATUS
23:16:54 policy-apex-pdp       Up 2 minutes
23:16:54 policy-pap            Up 2 minutes
23:16:54 grafana               Up 2 minutes
23:16:54 policy-api            Up 2 minutes
23:16:54 kafka                 Up 2 minutes
23:16:54 prometheus            Up 2 minutes
23:16:54 compose_zookeeper_1   Up 2 minutes
23:16:54 simulator             Up 2 minutes
23:16:54 mariadb               Up 2 minutes
23:16:54 + docker_stats
23:16:54 ++ uname -s
23:16:54 + '[' Linux == Darwin ']'
23:16:54 + sh -c 'top -bn1 | head -3'
23:16:54 top - 23:16:54 up 6 min,  0 users,  load average: 0.68, 1.05, 0.53
23:16:54 Tasks: 200 total,   1 running, 128 sleeping,   0 stopped,   0 zombie
23:16:54 %Cpu(s): 10.7 us,  2.2 sy,  0.0 ni, 82.9 id,  4.1 wa,  0.0 hi,  0.1 si,  0.1 st
23:16:54 + echo
23:16:54
23:16:54 + sh -c 'free -h'
23:16:54               total        used        free      shared  buff/cache   available
23:16:54 Mem:            31G        2.9G         21G        1.3M        6.7G         28G
23:16:54 Swap:          1.0G          0B        1.0G
23:16:54 + echo
23:16:54
23:16:54 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:54 NAMES                 STATUS
23:16:54 policy-apex-pdp       Up 2 minutes
23:16:54 policy-pap            Up 2 minutes
23:16:54 grafana               Up 2 minutes
23:16:54 policy-api            Up 2 minutes
23:16:54 kafka                 Up 2 minutes
23:16:54 prometheus            Up 2 minutes
23:16:54 compose_zookeeper_1   Up 2 minutes
23:16:54 simulator             Up 2 minutes
23:16:54 mariadb               Up 2 minutes
23:16:54 + echo
23:16:54
23:16:54 + docker stats --no-stream
23:16:57 CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
23:16:57 50b647c65335   policy-apex-pdp       2.24%     182.6MiB / 31.41GiB   0.57%   56.1kB / 90.4kB   0B / 0B         50
23:16:57 1db58b627eec   policy-pap            1.10%     490.5MiB / 31.41GiB   1.52%   2.33MB / 815kB    0B / 181MB      64
23:16:57 2c9ab507ee0b   grafana               0.03%     52.84MiB / 31.41GiB   0.16%   19.2kB / 4.69kB   0B / 23.9MB     14
23:16:57 f9d8745ecf88   policy-api            0.09%     774MiB / 31.41GiB     2.41%   2.49MB / 1.26MB   0B / 0B         55
23:16:57 48182883a08d   kafka                 1.21%     379.5MiB / 31.41GiB   1.18%   233kB / 210kB     0B / 606kB      83
23:16:57 2fe872a509fe   prometheus            0.44%     25.57MiB / 31.41GiB   0.08%   191kB / 11kB      205kB / 0B      13
23:16:57 8dc9896bd9c7   compose_zookeeper_1   0.14%     98.3MiB / 31.41GiB    0.31%   55.7kB / 47.3kB   0B / 377kB      60
23:16:57 3707560567d3   simulator             0.07%     124.2MiB / 31.41GiB   0.39%   1.58kB / 0B       0B / 0B         76
23:16:57 a8df4284cc57   mariadb               0.01%     103.2MiB / 31.41GiB   0.32%   1.95MB / 4.77MB   11MB / 68.2MB   28
23:16:57 + echo
23:16:57
23:16:57 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:57 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
23:16:57 + relax_set
23:16:57 + set +e
23:16:57 + set +o pipefail
23:16:57 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:57 ++ echo 'Shut down started!'
23:16:57 Shut down started!
23:16:57 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:57 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:16:57 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:16:57 ++ source export-ports.sh
23:16:57 ++ source get-versions.sh
23:16:59 ++ echo 'Collecting logs from docker compose containers...'
23:16:59 Collecting logs from docker compose containers...
23:16:59 ++ docker-compose logs
23:17:00 ++ cat docker_compose.log
23:17:00 Attaching to policy-apex-pdp, policy-pap, grafana, policy-api, kafka, policy-db-migrator, prometheus, compose_zookeeper_1, simulator, mariadb
23:17:00 mariadb | 2024-01-21 23:14:24+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
23:17:00 mariadb | 2024-01-21 23:14:24+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
23:17:00 mariadb | 2024-01-21 23:14:24+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
23:17:00 mariadb | 2024-01-21 23:14:25+00:00 [Note] [Entrypoint]: Initializing database files
23:17:00 mariadb | 2024-01-21 23:14:25 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
23:17:00 mariadb | 2024-01-21 23:14:25 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
23:17:00 mariadb | 2024-01-21 23:14:25 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
23:17:00 mariadb |
23:17:00 mariadb |
23:17:00 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
23:17:00 mariadb | To do so, start the server, then issue the following command:
23:17:00 mariadb |
23:17:00 mariadb | '/usr/bin/mysql_secure_installation'
23:17:00 mariadb |
23:17:00 mariadb | which will also give you the option of removing the test
23:17:00 mariadb | databases and anonymous user created by default. This is
23:17:00 mariadb | strongly recommended for production servers.
23:17:00 mariadb |
23:17:00 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
23:17:00 mariadb |
23:17:00 mariadb | Please report any problems at https://mariadb.org/jira
23:17:00 mariadb |
23:17:00 mariadb | The latest information about MariaDB is available at https://mariadb.org/.
23:17:00 mariadb |
23:17:00 mariadb | Consider joining MariaDB's strong and vibrant community:
23:17:00 mariadb | https://mariadb.org/get-involved/
23:17:00 mariadb |
23:17:00 mariadb | 2024-01-21 23:14:26+00:00 [Note] [Entrypoint]: Database files initialized
23:17:00 mariadb | 2024-01-21 23:14:26+00:00 [Note] [Entrypoint]: Starting temporary server
23:17:00 mariadb | 2024-01-21 23:14:26+00:00 [Note] [Entrypoint]: Waiting for server startup
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 98 ...
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Number of transaction pools: 1
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Completed initialization of buffer pool
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: 128 rollback segments are active.
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] InnoDB: log sequence number 46590; transaction id 14
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] Plugin 'FEEDBACK' is disabled.
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
23:17:00 mariadb | 2024-01-21 23:14:26 0 [Note] mariadbd: ready for connections.
23:17:00 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
23:17:00 kafka | ===> User
23:17:00 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:17:00 kafka | ===> Configuring ...
23:17:00 kafka | Running in Zookeeper mode...
23:17:00 kafka | ===> Running preflight checks ...
23:17:00 kafka | ===> Check if /var/lib/kafka/data is writable ...
23:17:00 kafka | ===> Check if Zookeeper is healthy ...
23:17:00 kafka | [2024-01-21 23:14:30,376] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,376] INFO Client environment:host.name=48182883a08d (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,376] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,376] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,376] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,376] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,377] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,377] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,377] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,377] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,377] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,377] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,377] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,377] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,377] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,377] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,378] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,378] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,380] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,384] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:17:00 kafka | [2024-01-21 23:14:30,388] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:17:00 kafka | [2024-01-21 23:14:30,395] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:17:00 kafka | [2024-01-21 23:14:30,411] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
23:17:00 kafka | [2024-01-21 23:14:30,411] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
23:17:00 kafka | [2024-01-21 23:14:30,421] INFO Socket connection established, initiating session, client: /172.17.0.7:55618, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
23:17:00 kafka | [2024-01-21 23:14:30,459] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003ef040000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
23:17:00 kafka | [2024-01-21 23:14:30,583] INFO Session: 0x1000003ef040000 closed (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:30,583] INFO EventThread shut down for session: 0x1000003ef040000 (org.apache.zookeeper.ClientCnxn)
23:17:00 kafka | Using log4j config /etc/kafka/log4j.properties
23:17:00 kafka | ===> Launching ...
23:17:00 kafka | ===> Launching kafka ...
23:17:00 kafka | [2024-01-21 23:14:31,245] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
23:17:00 kafka | [2024-01-21 23:14:31,557] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:17:00 kafka | [2024-01-21 23:14:31,650] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
23:17:00 kafka | [2024-01-21 23:14:31,651] INFO starting (kafka.server.KafkaServer)
23:17:00 kafka | [2024-01-21 23:14:31,652] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
23:17:00 kafka | [2024-01-21 23:14:31,668] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
23:17:00 mariadb | 2024-01-21 23:14:27+00:00 [Note] [Entrypoint]: Temporary server started.
23:17:00 mariadb | 2024-01-21 23:14:29+00:00 [Note] [Entrypoint]: Creating user policy_user
23:17:00 mariadb | 2024-01-21 23:14:29+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
23:17:00 mariadb |
23:17:00 mariadb | 2024-01-21 23:14:29+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
23:17:00 mariadb |
23:17:00 mariadb | 2024-01-21 23:14:29+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
23:17:00 mariadb | #!/bin/bash -xv
23:17:00 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
23:17:00 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
23:17:00 mariadb | #
23:17:00 mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
23:17:00 mariadb | # you may not use this file except in compliance with the License.
23:17:00 mariadb | # You may obtain a copy of the License at
23:17:00 mariadb | #
23:17:00 mariadb | # http://www.apache.org/licenses/LICENSE-2.0
23:17:00 mariadb | #
23:17:00 mariadb | # Unless required by applicable law or agreed to in writing, software
23:17:00 mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
23:17:00 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
23:17:00 mariadb | # See the License for the specific language governing permissions and
23:17:00 mariadb | # limitations under the License.
23:17:00 mariadb |
23:17:00 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:17:00 mariadb | do
23:17:00 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
23:17:00 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
23:17:00 mariadb | done
23:17:00 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:17:00 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
23:17:00 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:17:00 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:17:00 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
23:17:00 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:17:00 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:17:00 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
23:17:00 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:17:00 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:17:00 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
23:17:00 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:17:00 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:host.name=48182883a08d (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/jav
a/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client 
environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 23:17:00 kafka | [2024-01-21 23:14:31,673] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:17:00 zookeeper_1 | ===> User 23:17:00 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:17:00 zookeeper_1 | ===> Configuring ... 23:17:00 zookeeper_1 | ===> Running preflight checks ... 23:17:00 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 23:17:00 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 23:17:00 zookeeper_1 | ===> Launching ... 23:17:00 zookeeper_1 | ===> Launching zookeeper ... 23:17:00 zookeeper_1 | [2024-01-21 23:14:28,998] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,005] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,006] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,006] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,006] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,007] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,007] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,007] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,007] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,008] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,009] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,010] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,010] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,010] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,010] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,010] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,024] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@5fa07e12 (org.apache.zookeeper.server.ServerMetrics) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,026] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,026] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,029] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,039] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,039] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,039] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,039] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,039] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,039] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,039] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,039] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,039] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,039] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:host.name=8dc9896bd9c7 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,040] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-l
ocator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server 
environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:17:00 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:00 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:00 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:17:00 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:00 mariadb | 23:17:00 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:17:00 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:17:00 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:17:00 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:17:00 mariadb | 23:17:00 mariadb | 2024-01-21 23:14:30+00:00 [Note] [Entrypoint]: Stopping temporary server 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: FTS optimize thread exiting. 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Starting shutdown... 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Buffer pool(s) dump completed at 240121 23:14:30 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Shutdown completed; log sequence number 332242; transaction id 298 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] mariadbd: Shutdown complete 23:17:00 mariadb | 23:17:00 mariadb | 2024-01-21 23:14:30+00:00 [Note] [Entrypoint]: Temporary server stopped 23:17:00 mariadb | 23:17:00 mariadb | 2024-01-21 23:14:30+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:17:00 mariadb | 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
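[editor's note] At this point the traced init script has finished: six databases created, grants applied, FLUSH PRIVILEGES run, and the policyclamp schema loaded from /tmp/policy-clamp-create-tables.sql; the temporary server then stops and mariadbd restarts as PID 1. A quick verification sketch against the restarted server, reusing the credentials visible in the trace above (root/secret, policy_user/policy_user) and assuming a mysql client on the same Docker network:

    # Confirm the six databases exist and the application user's grants.
    mysql -h mariadb -uroot -psecret -e 'SHOW DATABASES;' \
      | grep -E 'migration|pooling|policyadmin|operationshistory|clampacm|policyclamp'
    mysql -h mariadb -uroot -psecret -e "SHOW GRANTS FOR 'policy_user'@'%';"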
23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Number of transaction pools: 1 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Completed initialization of buffer pool 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: 128 rollback segments are active. 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: log sequence number 332242; transaction id 299 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] Plugin 'FEEDBACK' is disabled. 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 
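[editor's note] The io_uring warning above is expected on this host: the os.version lines show kernel 4.15.0-192-generic, and io_uring needs 5.1 or newer, so InnoDB falls back to innodb_use_native_aio=OFF by itself. A sketch of making that fallback explicit, which avoids the warning on old kernels; the image tag and option name are taken from the log, the rest is illustrative:

    # io_uring requires kernel >= 5.1; check first.
    uname -r
    # Start mariadbd with native AIO disabled explicitly.
    docker run --rm -e MYSQL_ROOT_PASSWORD=secret mariadb:10.10 --innodb_use_native_aio=0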
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,041] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,042] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,043] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,043] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,044] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,044] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,045] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,047] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,047] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 policy-api | Waiting for mariadb port 3306... 
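[editor's note] policy-api's "Waiting for mariadb port 3306..." lines here, like policy-db-migrator's "nc: connect ... Connection refused" retries further down, are the usual poll-until-open startup pattern. A minimal sketch of it, assuming an nc that supports -z; the helper and its exact output format are illustrative, not taken from the actual entrypoint scripts:

    #!/usr/bin/env bash
    # Hypothetical wait_for_port helper approximating the log output.
    wait_for_port() {
      local host="$1" port="$2"
      echo "Waiting for ${host} port ${port}..."
      until nc -z "${host}" "${port}"; do sleep 2; done
      echo "${host} (${port}) open"
    }
    wait_for_port mariadb 3306
    wait_for_port policy-db-migrator 6824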
23:17:00 policy-api | mariadb (172.17.0.5:3306) open
23:17:00 policy-api | Waiting for policy-db-migrator port 6824...
23:17:00 policy-api | policy-db-migrator (172.17.0.6:6824) open
23:17:00 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
23:17:00 policy-api |
23:17:00 policy-api | . ____ _ __ _ _
23:17:00 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
23:17:00 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
23:17:00 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
23:17:00 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
23:17:00 policy-api | =========|_|==============|___/=/_/_/_/
23:17:00 policy-api | :: Spring Boot :: (v3.1.4)
23:17:00 policy-api |
23:17:00 policy-api | [2024-01-21T23:14:39.260+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 16 (/app/api.jar started by policy in /opt/app/policy/api/bin)
23:17:00 policy-api | [2024-01-21T23:14:39.262+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
23:17:00 policy-api | [2024-01-21T23:14:41.158+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
23:17:00 policy-api | [2024-01-21T23:14:41.264+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 95 ms. Found 6 JPA repository interfaces.
23:17:00 policy-api | [2024-01-21T23:14:41.728+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
23:17:00 policy-api | [2024-01-21T23:14:41.728+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution.
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:17:00 policy-api | [2024-01-21T23:14:42.467+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:17:00 policy-api | [2024-01-21T23:14:42.479+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:17:00 policy-api | [2024-01-21T23:14:42.481+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:17:00 policy-api | [2024-01-21T23:14:42.481+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] 23:17:00 policy-api | [2024-01-21T23:14:42.595+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:17:00 policy-api | [2024-01-21T23:14:42.595+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3250 ms 23:17:00 policy-api | [2024-01-21T23:14:43.083+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:17:00 policy-api | [2024-01-21T23:14:43.178+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:17:00 policy-api | [2024-01-21T23:14:43.182+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:17:00 policy-api | [2024-01-21T23:14:43.237+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:17:00 policy-api | [2024-01-21T23:14:43.610+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:17:00 policy-api | [2024-01-21T23:14:43.632+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:17:00 policy-api | [2024-01-21T23:14:43.743+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717 23:17:00 policy-api | [2024-01-21T23:14:43.746+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:17:00 policy-pap | Waiting for mariadb port 3306... 23:17:00 policy-pap | mariadb (172.17.0.5:3306) open 23:17:00 policy-pap | Waiting for kafka port 9092... 23:17:00 policy-pap | kafka (172.17.0.7:9092) open 23:17:00 policy-pap | Waiting for api port 6969... 23:17:00 policy-pap | api (172.17.0.8:6969) open 23:17:00 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:17:00 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:17:00 policy-pap | 23:17:00 policy-pap | . ____ _ __ _ _ 23:17:00 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:17:00 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:17:00 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:17:00 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:17:00 policy-pap | =========|_|==============|___/=/_/_/_/ 23:17:00 policy-pap | :: Spring Boot :: (v3.1.4) 23:17:00 policy-pap | 23:17:00 policy-pap | [2024-01-21T23:14:53.333+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 29 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:17:00 policy-pap | [2024-01-21T23:14:53.335+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:17:00 policy-pap | [2024-01-21T23:14:55.253+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:17:00 policy-pap | [2024-01-21T23:14:55.365+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 101 ms. 
Found 7 JPA repository interfaces. 23:17:00 policy-pap | [2024-01-21T23:14:55.774+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:17:00 policy-pap | [2024-01-21T23:14:55.775+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:17:00 policy-pap | [2024-01-21T23:14:56.557+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:17:00 policy-pap | [2024-01-21T23:14:56.568+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:17:00 policy-pap | [2024-01-21T23:14:56.572+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:17:00 policy-pap | [2024-01-21T23:14:56.572+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] 23:17:00 policy-pap | [2024-01-21T23:14:56.684+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:17:00 policy-pap | [2024-01-21T23:14:56.684+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3274 ms 23:17:00 policy-pap | [2024-01-21T23:14:57.181+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:17:00 policy-pap | [2024-01-21T23:14:57.281+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:17:00 policy-pap | [2024-01-21T23:14:57.285+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,047] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.737872041Z level=info msg="Starting Grafana" version=10.2.3 commit=1e84fede543acc892d2a2515187e545eb047f237 branch=HEAD compiled=2023-12-18T15:46:07Z 23:17:00 policy-apex-pdp | Waiting for mariadb port 3306... 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:17:00 policy-api | [2024-01-21T23:14:43.778+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 23:17:00 policy-pap | [2024-01-21T23:14:57.350+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:17:00 policy-pap | [2024-01-21T23:14:57.715+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738085243Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:17:00 policy-db-migrator | Waiting for mariadb port 3306... 23:17:00 policy-apex-pdp | Waiting for kafka port 9092... 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] Server socket created on IP: '0.0.0.0'. 
23:17:00 kafka | [2024-01-21 23:14:31,674] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:17:00 prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:17:00 policy-api | [2024-01-21T23:14:43.780+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 23:17:00 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,047] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:17:00 policy-pap | [2024-01-21T23:14:57.738+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738096203Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:17:00 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 23:17:00 policy-apex-pdp | mariadb (172.17.0.5:3306) open 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] Server socket created on IP: '::'. 23:17:00 kafka | [2024-01-21 23:14:31,675] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@32193bea (org.apache.zookeeper.ZooKeeper) 23:17:00 prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" 23:17:00 policy-api | [2024-01-21T23:14:45.841+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:17:00 simulator | overriding logback.xml 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,048] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:00 policy-pap | [2024-01-21T23:14:57.865+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2b03d52f 23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738100253Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:17:00 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 23:17:00 policy-apex-pdp | kafka (172.17.0.7:9092) open 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] mariadbd: ready for connections. 
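[editor's note] The observability side of the stack starts here alongside the policy components: Prometheus 2.49.1 with the default 15d retention, and Grafana 10.2.3, whose settings logger records the config layering order (defaults.ini, then grafana.ini, then command-line overrides, then the GF_* environment variables a few lines further down). A sketch for probing both once up, using their standard health endpoints; the localhost port mappings are assumptions, adjust to the compose file:

    # Prometheus readiness (standard /-/ready endpoint; the log further
    # down shows it listening on 0.0.0.0:9090 with TLS disabled).
    curl -sf http://localhost:9090/-/ready && echo "prometheus ready"
    # Grafana health (standard /api/health endpoint, default port 3000;
    # returns JSON including the running version).
    curl -sf http://localhost:3000/api/health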
23:17:00 kafka | [2024-01-21 23:14:31,679] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:17:00 prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" 23:17:00 policy-api | [2024-01-21T23:14:45.845+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:17:00 simulator | 2024-01-21 23:14:28,196 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,083] INFO Logging initialized @605ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:17:00 policy-pap | [2024-01-21T23:14:57.868+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738104703Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:17:00 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 23:17:00 policy-apex-pdp | Waiting for pap port 6969... 23:17:00 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:17:00 kafka | [2024-01-21 23:14:31,686] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:17:00 prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:17:00 policy-api | [2024-01-21T23:14:47.199+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:17:00 simulator | 2024-01-21 23:14:28,275 INFO org.onap.policy.models.simulators starting 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,220] WARN o.e.j.s.ServletContextHandler@45385f75{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:17:00 policy-pap | [2024-01-21T23:14:57.921+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738107443Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:17:00 policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded! 23:17:00 policy-apex-pdp | pap (172.17.0.10:6969) open 23:17:00 mariadb | 2024-01-21 23:14:30 0 [Note] InnoDB: Buffer pool(s) load completed at 240121 23:14:30 23:17:00 kafka | [2024-01-21 23:14:31,688] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient) 23:17:00 prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" 23:17:00 policy-api | [2024-01-21T23:14:48.067+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:17:00 simulator | 2024-01-21 23:14:28,275 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,220] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:17:00 policy-pap | [2024-01-21T23:14:57.923+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738110693Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:17:00 policy-db-migrator | 321 blocks 23:17:00 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:17:00 mariadb | 2024-01-21 23:14:31 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) 23:17:00 kafka | [2024-01-21 23:14:31,696] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 23:17:00 prometheus | ts=2024-01-21T23:14:29.707Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:17:00 policy-api | [2024-01-21T23:14:49.254+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:17:00 simulator | 2024-01-21 23:14:28,497 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,240] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:17:00 policy-pap | [2024-01-21T23:14:59.973+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738113934Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:17:00 policy-db-migrator | Preparing upgrade release version: 0800 23:17:00 policy-apex-pdp | [2024-01-21T23:15:04.751+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:17:00 mariadb | 2024-01-21 23:14:31 15 [Warning] Aborted connection 15 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 23:17:00 kafka | [2024-01-21 23:14:31,704] INFO Socket connection established, initiating session, client: /172.17.0.7:55620, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 23:17:00 prometheus | ts=2024-01-21T23:14:29.713Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:17:00 policy-api | [2024-01-21T23:14:49.492+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@58a01e47, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6149184e, org.springframework.security.web.context.SecurityContextHolderFilter@234a08ea, org.springframework.security.web.header.HeaderWriterFilter@2e26841f, org.springframework.security.web.authentication.logout.LogoutFilter@c7a7d3, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3413effc, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@56d3e4a9, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2542d320, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6f3a8d5e, org.springframework.security.web.access.ExceptionTranslationFilter@19bd1f98, org.springframework.security.web.access.intercept.AuthorizationFilter@729f8c5d] 23:17:00 simulator | 2024-01-21 23:14:28,499 INFO org.onap.policy.models.simulators starting A&AI simulator 23:17:00 zookeeper_1 | [2024-01-21 23:14:29,278] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:17:00 policy-pap | [2024-01-21T23:14:59.977+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738117864Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:17:00 policy-db-migrator | Preparing upgrade release version: 0900 23:17:00 policy-apex-pdp | [2024-01-21T23:15:04.959+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:00 mariadb | 2024-01-21 23:14:32 61 [Warning] Aborted connection 61 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 
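[editor's note] The mariadb "Aborted connection ... (This connection closed normally without authentication)" warnings line up with the port probes seen earlier: hosts 172.17.0.6, .8, .10 and .11 are the containers running their wait-for-port checks, which open a TCP connection to 3306 and close it without logging in. A sketch for mapping those addresses back to container names, assuming a default compose network name (the real name depends on the compose project):

    # List IP <-> container-name pairs on the bridge network.
    docker network inspect compose_default \
      --format '{{range .Containers}}{{.IPv4Address}} {{.Name}}{{println}}{{end}}'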
23:17:00 kafka | [2024-01-21 23:14:31,713] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003ef040001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
23:17:00 prometheus | ts=2024-01-21T23:14:29.714Z caller=main.go:1039 level=info msg="Starting TSDB ..."
23:17:00 policy-api | [2024-01-21T23:14:50.506+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
23:17:00 simulator | 2024-01-21 23:14:28,621 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:17:00 policy-pap | [2024-01-21T23:15:00.625+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738121414Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
23:17:00 policy-db-migrator | Preparing upgrade release version: 1000
23:17:00 policy-apex-pdp | allow.auto.create.topics = true
23:17:00 mariadb | 2024-01-21 23:14:33 108 [Warning] Aborted connection 108 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
23:17:00 kafka | [2024-01-21 23:14:31,721] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
23:17:00 prometheus | ts=2024-01-21T23:14:29.717Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090
23:17:00 policy-api | [2024-01-21T23:14:50.572+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
23:17:00 simulator | 2024-01-21 23:14:28,632 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,278] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
23:17:00 policy-pap | [2024-01-21T23:15:01.244+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738125274Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
23:17:00 policy-db-migrator | Preparing upgrade release version: 1100
23:17:00 policy-apex-pdp | auto.commit.interval.ms = 5000
23:17:00 kafka | [2024-01-21 23:14:32,064] INFO Cluster ID = -jrszSKtSKq5TnXDeh3xeA (kafka.server.KafkaServer)
23:17:00 prometheus | ts=2024-01-21T23:14:29.717Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
23:17:00 policy-api | [2024-01-21T23:14:50.595+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
23:17:00 simulator | 2024-01-21 23:14:28,635 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,279] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
23:17:00 policy-pap | [2024-01-21T23:15:01.375+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738134194Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
23:17:00 policy-db-migrator | Preparing upgrade release version: 1200
23:17:00 policy-apex-pdp | auto.include.jmx.reporter = true
23:17:00 kafka | [2024-01-21 23:14:32,068] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
23:17:00 prometheus | ts=2024-01-21T23:14:29.724Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
23:17:00 policy-api | [2024-01-21T23:14:50.612+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.126 seconds (process running for 12.724)
23:17:00 simulator | 2024-01-21 23:14:28,641 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,284] WARN ServletContext@o.e.j.s.ServletContextHandler@45385f75{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
23:17:00 policy-pap | [2024-01-21T23:15:01.691+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738138094Z level=info msg=Target target=[all]
23:17:00 policy-db-migrator | Preparing upgrade release version: 1300
23:17:00 policy-apex-pdp | auto.offset.reset = latest
23:17:00 kafka | [2024-01-21 23:14:32,124] INFO KafkaConfig values:
23:17:00 prometheus | ts=2024-01-21T23:14:29.724Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=6.12µs
23:17:00 policy-api | [2024-01-21T23:15:07.412+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:17:00 simulator | 2024-01-21 23:14:28,730 INFO Session workerName=node0
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,293] INFO Started o.e.j.s.ServletContextHandler@45385f75{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
23:17:00 policy-pap | allow.auto.create.topics = true
23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738145864Z level=info msg="Path Home" path=/usr/share/grafana
23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738149064Z level=info msg="Path Data" path=/var/lib/grafana
23:17:00 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:17:00 policy-db-migrator | Done
23:17:00 policy-db-migrator | name version
23:17:00 policy-api | [2024-01-21T23:15:07.412+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
23:17:00 policy-api | [2024-01-21T23:15:07.415+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,311] INFO Started ServerConnector@304bb45b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
23:17:00 simulator | 2024-01-21 23:14:29,463 INFO Using GSON for REST calls
23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738157444Z level=info msg="Path Logs" path=/var/log/grafana
23:17:00 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
23:17:00 policy-apex-pdp | check.crcs = true
23:17:00 prometheus | ts=2024-01-21T23:14:29.724Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while"
23:17:00 policy-db-migrator | policyadmin 0
23:17:00 policy-api | [2024-01-21T23:15:07.682+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers:
23:17:00 policy-pap | auto.commit.interval.ms = 5000
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,312] INFO Started @835ms (org.eclipse.jetty.server.Server)
23:17:00 simulator | 2024-01-21 23:14:29,536 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}
23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738160594Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
23:17:00 kafka | alter.config.policy.class.name = null
23:17:00 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:17:00 prometheus | ts=2024-01-21T23:14:29.725Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
23:17:00 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
23:17:00 policy-api | []
23:17:00 policy-pap | auto.include.jmx.reporter = true
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,312] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
23:17:00 simulator | 2024-01-21 23:14:29,543 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738164734Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
23:17:00 kafka | alter.log.dirs.replication.quota.window.num = 11
23:17:00 policy-apex-pdp | client.id = consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-1
23:17:00 prometheus | ts=2024-01-21T23:14:29.725Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=109.951µs wal_replay_duration=489.655µs wbl_replay_duration=400ns total_replay_duration=856.708µs
23:17:00 policy-db-migrator | upgrade: 0 -> 1300
23:17:00 policy-pap | auto.offset.reset = latest
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,319] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
23:17:00 simulator | 2024-01-21 23:14:29,551 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1837ms
23:17:00 grafana | logger=settings t=2024-01-21T23:14:30.738170624Z level=info msg="App mode production"
23:17:00 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
23:17:00 policy-apex-pdp | client.rack =
23:17:00 prometheus | ts=2024-01-21T23:14:29.727Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC
23:17:00 policy-db-migrator |
23:17:00 policy-pap | bootstrap.servers = [kafka:9092]
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,320] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
23:17:00 simulator | 2024-01-21 23:14:29,552 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4083 ms.
23:17:00 grafana | logger=sqlstore t=2024-01-21T23:14:30.738444237Z level=info msg="Connecting to DB" dbtype=sqlite3
23:17:00 kafka | authorizer.class.name =
23:17:00 policy-apex-pdp | connections.max.idle.ms = 540000
23:17:00 prometheus | ts=2024-01-21T23:14:29.727Z caller=main.go:1063 level=info msg="TSDB started"
23:17:00 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
23:17:00 policy-pap | check.crcs = true
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,321] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:17:00 simulator | 2024-01-21 23:14:29,561 INFO org.onap.policy.models.simulators starting SDNC simulator
23:17:00 grafana | logger=sqlstore t=2024-01-21T23:14:30.738463937Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
23:17:00 kafka | auto.create.topics.enable = true
23:17:00 policy-apex-pdp | default.api.timeout.ms = 60000
23:17:00 prometheus | ts=2024-01-21T23:14:29.727Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | client.dns.lookup = use_all_dns_ips
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,322] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:17:00 simulator | 2024-01-21 23:14:29,574 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.739035233Z level=info msg="Starting DB migrations"
23:17:00 kafka | auto.include.jmx.reporter = true
23:17:00 policy-apex-pdp | enable.auto.commit = true
23:17:00 prometheus | ts=2024-01-21T23:14:29.728Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=885.889µs db_storage=1.69µs remote_storage=2.32µs web_handler=670ns query_engine=1.4µs scrape=217.172µs scrape_sd=98.811µs notify=28.811µs notify_sd=21.09µs rules=2.36µs tracing=12.7µs
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:17:00 policy-pap | client.id = consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-1
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,342] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:17:00 simulator | 2024-01-21 23:14:29,577 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.739909751Z level=info msg="Executing migration" id="create migration_log table"
23:17:00 kafka | auto.leader.rebalance.enable = true
23:17:00 policy-apex-pdp | exclude.internal.topics = true
23:17:00 prometheus | ts=2024-01-21T23:14:29.728Z caller=main.go:1024 level=info msg="Server is ready to receive web requests."
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | client.rack =
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,342] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:17:00 simulator | 2024-01-21 23:14:29,579 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.740624378Z level=info msg="Migration successfully executed" id="create migration_log table" duration=715.307µs
23:17:00 kafka | background.threads = 10
23:17:00 policy-apex-pdp | fetch.max.bytes = 52428800
23:17:00 prometheus | ts=2024-01-21T23:14:29.728Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
23:17:00 policy-db-migrator |
23:17:00 policy-pap | connections.max.idle.ms = 540000
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,346] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
23:17:00 simulator | 2024-01-21 23:14:29,580 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.744569627Z level=info msg="Executing migration" id="create user table"
23:17:00 kafka | broker.heartbeat.interval.ms = 2000
23:17:00 policy-apex-pdp | fetch.max.wait.ms = 500
23:17:00 policy-db-migrator |
23:17:00 policy-pap | default.api.timeout.ms = 60000
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,346] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
23:17:00 simulator | 2024-01-21 23:14:29,583 INFO Session workerName=node0
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.745025961Z level=info msg="Migration successfully executed" id="create user table" duration=456.454µs
23:17:00 kafka | broker.id = 1
23:17:00 policy-apex-pdp | fetch.min.bytes = 1
23:17:00 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:17:00 policy-pap | enable.auto.commit = true
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,350] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
23:17:00 simulator | 2024-01-21 23:14:29,642 INFO Using GSON for REST calls
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.750219592Z level=info msg="Executing migration" id="add unique index user.login"
23:17:00 kafka | broker.id.generation.enable = true
23:17:00 policy-apex-pdp | group.id = e43a1262-c2bd-4185-8b6c-0623a45ad046
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | exclude.internal.topics = true
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,350] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:17:00 simulator | 2024-01-21 23:14:29,651 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.75102197Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=802.458µs
23:17:00 kafka | broker.rack = null
23:17:00 policy-apex-pdp | group.instance.id = null
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
23:17:00 policy-pap | fetch.max.bytes = 52428800
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,358] INFO Snapshot loaded in 12 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
23:17:00 simulator | 2024-01-21 23:14:29,653 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.754734847Z level=info msg="Executing migration" id="add unique index user.email"
23:17:00 kafka | broker.session.timeout.ms = 9000
23:17:00 policy-apex-pdp | heartbeat.interval.ms = 3000
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | fetch.max.wait.ms = 500
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,359] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:17:00 simulator | 2024-01-21 23:14:29,653 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1939ms
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.755566465Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=827.458µs
23:17:00 kafka | client.quota.callback.class = null
23:17:00 policy-apex-pdp | interceptor.classes = []
23:17:00 policy-db-migrator |
23:17:00 policy-pap | fetch.min.bytes = 1
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,359] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.759057099Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
23:17:00 simulator | 2024-01-21 23:14:29,653 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4925 ms.
23:17:00 simulator | 2024-01-21 23:14:29,654 INFO org.onap.policy.models.simulators starting SO simulator
23:17:00 policy-apex-pdp | internal.leave.group.on.close = true
23:17:00 policy-pap | group.id = 0096ba3d-86d0-4a50-8361-ec89b03a0194
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.760020018Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=962.849µs
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.766472242Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
23:17:00 kafka | compression.type = producer
23:17:00 simulator | 2024-01-21 23:14:29,658 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
23:17:00 policy-pap | group.instance.id = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.767136008Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=663.616µs
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.770857995Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
23:17:00 kafka | connection.failed.authentication.delay.ms = 100
23:17:00 simulator | 2024-01-21 23:14:29,659 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:17:00 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
23:17:00 policy-apex-pdp | isolation.level = read_uncommitted
23:17:00 policy-pap | heartbeat.interval.ms = 3000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.775686572Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.826457ms
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.785571669Z level=info msg="Executing migration" id="create user table v2"
23:17:00 kafka | connections.max.idle.ms = 600000
23:17:00 simulator | 2024-01-21 23:14:29,660 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:17:00 policy-pap | interceptor.classes = []
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.786323756Z level=info msg="Migration successfully executed" id="create user table v2" duration=752.347µs
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.792232095Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
23:17:00 kafka | connections.max.reauth.ms = 0
23:17:00 simulator | 2024-01-21 23:14:29,660 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:17:00 policy-apex-pdp | max.partition.fetch.bytes = 1048576
23:17:00 policy-pap | internal.leave.group.on.close = true
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.793290385Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.05804ms
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.797824369Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
23:17:00 kafka | control.plane.listener.name = null
23:17:00 simulator | 2024-01-21 23:14:29,668 INFO Session workerName=node0
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | max.poll.interval.ms = 300000
23:17:00 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.79892325Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.098821ms
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.802476925Z level=info msg="Executing migration" id="copy data_source v1 to v2"
23:17:00 kafka | controlled.shutdown.enable = true
23:17:00 simulator | 2024-01-21 23:14:29,755 INFO Using GSON for REST calls
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | max.poll.records = 500
23:17:00 policy-pap | isolation.level = read_uncommitted
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.803091121Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=614.066µs
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.809633395Z level=info msg="Executing migration" id="Drop old table user_v1"
23:17:00 kafka | controlled.shutdown.max.retries = 3
23:17:00 simulator | 2024-01-21 23:14:29,770 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | metadata.max.age.ms = 300000
23:17:00 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.8101138Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=478.315µs
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.815218Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
23:17:00 kafka | controlled.shutdown.retry.backoff.ms = 5000
23:17:00 simulator | 2024-01-21 23:14:29,771 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
23:17:00 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
23:17:00 policy-apex-pdp | metric.reporters = []
23:17:00 policy-pap | max.partition.fetch.bytes = 1048576
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.816989727Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.770857ms
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.825507781Z level=info msg="Executing migration" id="Update user table charset"
23:17:00 kafka | controller.listener.names = null
23:17:00 simulator | 2024-01-21 23:14:29,771 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @2057ms
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | metrics.num.samples = 2
23:17:00 policy-pap | max.poll.interval.ms = 300000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.825536581Z level=info msg="Migration successfully executed" id="Update user table charset" duration=29.95µs
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.832330098Z level=info msg="Executing migration" id="Add last_seen_at column to user"
23:17:00 kafka | controller.quorum.append.linger.ms = 25
23:17:00 simulator | 2024-01-21 23:14:29,771 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4888 ms.
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:17:00 policy-apex-pdp | metrics.recording.level = INFO
23:17:00 policy-pap | max.poll.records = 500
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.833430859Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.100691ms
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.840091194Z level=info msg="Executing migration" id="Add missing user data"
23:17:00 kafka | controller.quorum.election.backoff.max.ms = 1000
23:17:00 simulator | 2024-01-21 23:14:29,772 INFO org.onap.policy.models.simulators starting VFC simulator
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | metrics.sample.window.ms = 30000
23:17:00 policy-pap | metadata.max.age.ms = 300000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.840295056Z level=info msg="Migration successfully executed" id="Add missing user data" duration=206.702µs
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.845202434Z level=info msg="Executing migration" id="Add is_disabled column to user"
23:17:00 kafka | controller.quorum.election.timeout.ms = 1000
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,369] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:17:00 policy-pap | metric.reporters = []
23:17:00 simulator | 2024-01-21 23:14:29,775 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:17:00 kafka | controller.quorum.fetch.timeout.ms = 2000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.846371896Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.168372ms
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.85090767Z level=info msg="Executing migration" id="Add index user.login/user.email"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | receive.buffer.bytes = 65536
23:17:00 policy-pap | metrics.num.samples = 2
23:17:00 simulator | 2024-01-21 23:14:29,775 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:17:00 kafka | controller.quorum.request.timeout.ms = 2000
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,370] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.852166042Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.258492ms
23:17:00 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
23:17:00 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:17:00 policy-pap | metrics.recording.level = INFO
23:17:00 simulator | 2024-01-21 23:14:29,776 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:17:00 kafka | controller.quorum.retry.backoff.ms = 20
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,409] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.859351673Z level=info msg="Executing migration" id="Add is_service_account column to user"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | reconnect.backoff.ms = 50
23:17:00 policy-pap | metrics.sample.window.ms = 30000
23:17:00 simulator | 2024-01-21 23:14:29,777 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
23:17:00 kafka | controller.quorum.voters = []
23:17:00 zookeeper_1 | [2024-01-21 23:14:29,410] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.86112434Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.772167ms
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:17:00 policy-apex-pdp | request.timeout.ms = 30000
23:17:00 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:17:00 simulator | 2024-01-21 23:14:29,781 INFO Session workerName=node0
23:17:00 kafka | controller.quota.window.num = 11
23:17:00 zookeeper_1 | [2024-01-21 23:14:30,439] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.86618284Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | retry.backoff.ms = 100
23:17:00 policy-pap | receive.buffer.bytes = 65536
23:17:00 simulator | 2024-01-21 23:14:29,830 INFO Using GSON for REST calls
23:17:00 kafka | controller.quota.window.size.seconds = 1
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.879063546Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=12.880716ms
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | sasl.client.callback.handler.class = null
23:17:00 policy-pap | reconnect.backoff.max.ms = 1000
23:17:00 simulator | 2024-01-21 23:14:29,840 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}
23:17:00 kafka | controller.socket.timeout.ms = 30000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.883305418Z level=info msg="Executing migration" id="create temp user table v1-7"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | sasl.jaas.config = null
23:17:00 policy-pap | reconnect.backoff.ms = 50
23:17:00 simulator | 2024-01-21 23:14:29,842 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.884822883Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.519125ms
23:17:00 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
23:17:00 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:17:00 policy-pap | request.timeout.ms = 30000
23:17:00 kafka | create.topic.policy.class.name = null
23:17:00 simulator | 2024-01-21 23:14:29,842 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @2127ms
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.891172655Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
23:17:00 policy-pap | retry.backoff.ms = 100
23:17:00 kafka | default.replication.factor = 1
23:17:00 kafka | delegation.token.expiry.check.interval.ms = 3600000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.892397817Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.225162ms
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
23:17:00 policy-apex-pdp | sasl.kerberos.service.name = null
23:17:00 policy-pap | sasl.client.callback.handler.class = null
23:17:00 simulator | 2024-01-21 23:14:29,842 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4933 ms.
23:17:00 kafka | delegation.token.expiry.time.ms = 86400000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.897616679Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
23:17:00 policy-pap | sasl.jaas.config = null
23:17:00 simulator | 2024-01-21 23:14:29,843 INFO org.onap.policy.models.simulators started
23:17:00 kafka | delegation.token.master.key = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.898294055Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=677.386µs
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
23:17:00 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:17:00 kafka | delegation.token.max.lifetime.ms = 604800000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.906319804Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | sasl.login.callback.handler.class = null
23:17:00 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:17:00 kafka | delegation.token.secret.key = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.907562966Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.243162ms
23:17:00 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
23:17:00 policy-apex-pdp | sasl.login.class = null
23:17:00 policy-pap | sasl.kerberos.service.name = null
23:17:00 kafka | delete.records.purgatory.purge.interval.requests = 1
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.914707936Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | sasl.login.connect.timeout.ms = null
23:17:00 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:17:00 kafka | delete.topic.enable = true
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.915687056Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=978.69µs
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 policy-apex-pdp | sasl.login.read.timeout.ms = null
23:17:00 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:17:00 kafka | early.start.listeners = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.920605244Z level=info msg="Executing migration" id="Update temp_user table charset"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
23:17:00 policy-pap | sasl.login.callback.handler.class = null
23:17:00 kafka | fetch.max.bytes = 57671680
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.920636384Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=32.01µs
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
23:17:00 kafka | fetch.purgatory.purge.interval.requests = 1000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.923553203Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
23:17:00 policy-pap | sasl.login.class = null
23:17:00 kafka | group.consumer.assignors = []
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.924034178Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=480.945µs
23:17:00 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
23:17:00 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
23:17:00 policy-pap | sasl.login.connect.timeout.ms = null
23:17:00 kafka | group.consumer.heartbeat.interval.ms = 5000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.926971307Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
23:17:00 kafka | group.consumer.max.heartbeat.interval.ms = 15000
23:17:00 policy-pap | sasl.login.read.timeout.ms = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.927544412Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=572.456µs
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:17:00 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
23:17:00 kafka | group.consumer.max.session.timeout.ms = 60000
23:17:00 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.932495371Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | sasl.mechanism = GSSAPI
23:17:00 kafka | group.consumer.max.size = 2147483647
23:17:00 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.933086826Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=591.345µs
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
23:17:00 kafka | group.consumer.min.heartbeat.interval.ms = 5000
23:17:00 policy-pap | sasl.login.refresh.window.factor = 0.8
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.936984185Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:17:00 kafka | group.consumer.min.session.timeout.ms = 45000
23:17:00 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.937909494Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=925.299µs
23:17:00 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
23:17:00 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:17:00 kafka | group.consumer.session.timeout.ms = 45000
23:17:00 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.94161921Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:17:00 kafka | group.coordinator.new.enable = false
23:17:00 policy-pap | sasl.login.retry.backoff.ms = 100
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.945862462Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.243512ms
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:17:00 kafka | group.coordinator.threads = 1
23:17:00 policy-pap | sasl.mechanism = GSSAPI
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.950575208Z level=info msg="Executing migration" id="create temp_user v2"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:17:00 kafka | group.initial.rebalance.delay.ms = 3000
23:17:00 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.951296525Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=721.027µs
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:17:00 kafka | group.max.session.timeout.ms = 1800000
23:17:00 policy-pap | sasl.oauthbearer.expected.audience = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.954965111Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:17:00 kafka | group.max.size = 2147483647
23:17:00 policy-pap | sasl.oauthbearer.expected.issuer = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.955650178Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=684.727µs
23:17:00 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
23:17:00 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
23:17:00 kafka | group.min.session.timeout.ms = 6000
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.960282694Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
23:17:00 kafka | initial.broker.registration.timeout.ms = 60000
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.961376414Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.09309ms
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 policy-apex-pdp | security.protocol = PLAINTEXT
23:17:00 kafka | inter.broker.listener.name = PLAINTEXT
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.964846138Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | security.providers = null
23:17:00 kafka | inter.broker.protocol.version = 3.5-IV2
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.965924829Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.078551ms
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | send.buffer.bytes = 131072
23:17:00 kafka | kafka.metrics.polling.interval.secs = 10
23:17:00 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.970642905Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | session.timeout.ms = 45000
23:17:00 kafka | kafka.metrics.reporters = []
23:17:00 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.971358622Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=715.447µs
23:17:00 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
23:17:00 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:17:00 kafka | leader.imbalance.check.interval.seconds = 300
23:17:00 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.974646294Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:17:00 kafka | leader.imbalance.per.broker.percentage = 10
23:17:00 policy-pap | security.protocol = PLAINTEXT
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.975053358Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=406.984µs
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:17:00 policy-apex-pdp | ssl.cipher.suites = null
23:17:00 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
23:17:00 policy-pap | security.providers = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.978623343Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:17:00 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
23:17:00 policy-pap | send.buffer.bytes = 131072
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.979517922Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=900.949µs
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:17:00 kafka | log.cleaner.backoff.ms = 15000
23:17:00 policy-pap | session.timeout.ms = 45000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.984035206Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | ssl.engine.factory.class = null
23:17:00 kafka | log.cleaner.dedupe.buffer.size = 134217728
23:17:00 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.984469701Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=434.274µs
23:17:00 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
23:17:00 policy-apex-pdp | ssl.key.password = null
23:17:00 kafka | log.cleaner.delete.retention.ms = 86400000
23:17:00 policy-pap | socket.connection.setup.timeout.ms = 10000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.987702052Z level=info msg="Executing migration" id="create star table"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:17:00 kafka | log.cleaner.enable = true
23:17:00 policy-pap | ssl.cipher.suites = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.988401319Z level=info msg="Migration successfully executed" id="create star table" duration=694.207µs
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:17:00 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:17:00 kafka | log.cleaner.io.buffer.load.factor = 0.9
23:17:00 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.991643041Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | ssl.keystore.key = null
23:17:00 kafka | log.cleaner.io.buffer.size = 524288
23:17:00 policy-pap | ssl.endpoint.identification.algorithm = https
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.992870843Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.226462ms
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | ssl.keystore.location = null
23:17:00 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
23:17:00 policy-pap | ssl.engine.factory.class = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.998382767Z level=info msg="Executing migration" id="create org table v1"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | ssl.keystore.password = null
23:17:00 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
23:17:00 policy-pap | ssl.key.password = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:30.999382257Z level=info msg="Migration successfully executed" id="create org table v1" duration=999.23µs
23:17:00 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
23:17:00 policy-apex-pdp | ssl.keystore.type = JKS
23:17:00 kafka | log.cleaner.min.cleanable.ratio = 0.5
23:17:00 policy-pap | ssl.keymanager.algorithm = SunX509
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.004630748Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | ssl.protocol = TLSv1.3
23:17:00 kafka | log.cleaner.min.compaction.lag.ms = 0
23:17:00 policy-pap | ssl.keystore.certificate.chain = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.005363606Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=732.158µs
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 policy-apex-pdp | ssl.provider = null
23:17:00 kafka | log.cleaner.threads = 1
23:17:00 policy-pap | ssl.keystore.key = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.008660078Z level=info msg="Executing migration" id="create org_user table v1"
23:17:00 policy-apex-pdp | ssl.secure.random.implementation = null
23:17:00 kafka | log.cleanup.policy = [delete]
23:17:00 policy-pap | ssl.keystore.location = null
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.009681998Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.02134ms
23:17:00 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:17:00 kafka | log.dir = /tmp/kafka-logs
23:17:00 policy-pap | ssl.keystore.password = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.012876709Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
23:17:00 policy-apex-pdp | ssl.truststore.certificates = null
23:17:00 kafka | log.dirs = /var/lib/kafka/data
23:17:00 policy-pap | ssl.keystore.type = JKS
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.0140897Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.212722ms
23:17:00 policy-apex-pdp | ssl.truststore.location = null
23:17:00 kafka | log.flush.interval.messages = 9223372036854775807
23:17:00 policy-pap | ssl.protocol = TLSv1.3
23:17:00 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
23:17:00 policy-apex-pdp | ssl.truststore.password = null
23:17:00 policy-pap | ssl.provider = null
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.026303009Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
23:17:00 kafka | log.flush.interval.ms = null
23:17:00 policy-apex-pdp | ssl.truststore.type = JKS
23:17:00 policy-pap | ssl.secure.random.implementation = null
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.028240257Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.942488ms
23:17:00 kafka | log.flush.offset.checkpoint.interval.ms = 60000
23:17:00 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:17:00 policy-pap | ssl.trustmanager.algorithm = PKIX
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.031852662Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
23:17:00 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
23:17:00 policy-apex-pdp |
23:17:00 policy-pap | ssl.truststore.certificates = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.03264063Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=787.638µs
23:17:00 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.127+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.035698849Z level=info msg="Executing migration" id="Update org table charset"
23:17:00 kafka | log.index.interval.bytes = 4096
23:17:00 policy-pap | ssl.truststore.location = null
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.127+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.03572684Z level=info msg="Migration successfully executed" id="Update org table charset" duration=28.371µs
23:17:00 kafka | log.index.size.max.bytes = 10485760
23:17:00 policy-pap | ssl.truststore.password = null
23:17:00 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.038722479Z level=info msg="Executing migration" id="Update org_user table charset"
23:17:00 kafka | log.message.downconversion.enable = true
23:17:00 policy-pap | ssl.truststore.type = JKS
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.127+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878905125
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.038747769Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=25.37µs
23:17:00 kafka | log.message.format.version = 3.0-IV1
23:17:00 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.130+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-1, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Subscribed to topic(s): policy-pdp-pap
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.043313493Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
23:17:00 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
23:17:00 policy-pap |
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.144+00:00|INFO|ServiceManager|main] service manager starting
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.043489085Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=188.192µs
23:17:00 kafka | log.message.timestamp.type = CreateTime
23:17:00 policy-pap | [2024-01-21T23:15:01.886+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.144+00:00|INFO|ServiceManager|main] service manager starting topics
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.047429013Z level=info msg="Executing migration" id="create dashboard table"
23:17:00 kafka | log.preallocate = false
23:17:00 policy-pap | [2024-01-21T23:15:01.887+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.150+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e43a1262-c2bd-4185-8b6c-0623a45ad046, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.048468683Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.03906ms
23:17:00 kafka | log.retention.bytes = -1
23:17:00 policy-pap | [2024-01-21T23:15:01.887+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878901884
23:17:00 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.176+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.052113769Z level=info msg="Executing migration" id="add index dashboard.account_id"
23:17:00 kafka | log.retention.check.interval.ms = 300000
23:17:00 policy-pap | [2024-01-21T23:15:01.890+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-1, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Subscribed to topic(s): policy-pdp-pap
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | allow.auto.create.topics = true
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.052881056Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=766.717µs
23:17:00 kafka | log.retention.hours = 168
23:17:00 policy-pap | [2024-01-21T23:15:01.891+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:17:00 policy-apex-pdp | auto.commit.interval.ms = 5000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.055890095Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
23:17:00 kafka | log.retention.minutes = null
23:17:00 policy-pap | allow.auto.create.topics = true
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | auto.include.jmx.reporter = true
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.056732833Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=839.488µs
23:17:00 kafka | log.retention.ms = null
23:17:00 policy-pap | auto.commit.interval.ms = 5000
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | auto.offset.reset = latest
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.062189086Z level=info msg="Executing migration" id="create dashboard_tag table"
23:17:00 kafka | log.roll.hours = 168
23:17:00 policy-pap | auto.include.jmx.reporter = true
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.063325047Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.135911ms
23:17:00 kafka | log.roll.jitter.hours = 0
23:17:00 policy-pap | auto.offset.reset = latest
23:17:00 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
23:17:00 policy-apex-pdp | check.crcs = true
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.067344485Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
23:17:00 kafka | log.roll.jitter.ms = null
23:17:00 policy-pap | bootstrap.servers = [kafka:9092]
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.068596637Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.251352ms
23:17:00 kafka | log.roll.ms = null
23:17:00 policy-pap | check.crcs = true
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 policy-apex-pdp | client.id = consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.072267323Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
23:17:00 kafka | log.segment.bytes = 1073741824
23:17:00 policy-pap | client.dns.lookup = use_all_dns_ips
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | client.rack =
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.074426394Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=2.158261ms
23:17:00 kafka | log.segment.delete.delay.ms = 60000
23:17:00 policy-pap | client.id = consumer-policy-pap-2
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.079464232Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
23:17:00 kafka | max.connection.creation.rate = 2147483647
23:17:00 policy-apex-pdp | connections.max.idle.ms = 540000
23:17:00 policy-pap | client.rack =
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.086750393Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.284631ms
23:17:00 kafka | max.connections = 2147483647
23:17:00 policy-apex-pdp | default.api.timeout.ms = 60000
23:17:00 policy-pap | connections.max.idle.ms = 540000
23:17:00 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.090408109Z level=info msg="Executing migration" id="create dashboard v2"
23:17:00 kafka | max.connections.per.ip = 2147483647
23:17:00 policy-apex-pdp | enable.auto.commit = true
23:17:00 policy-pap | default.api.timeout.ms = 60000
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.091339718Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=931.439µs
23:17:00 policy-apex-pdp | exclude.internal.topics = true
23:17:00 policy-pap | enable.auto.commit = true
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:17:00 kafka | max.connections.per.ip.overrides =
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.094480558Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
23:17:00 policy-apex-pdp | fetch.max.bytes = 52428800
23:17:00 policy-pap | exclude.internal.topics = true
23:17:00 policy-db-migrator | --------------
23:17:00 kafka | max.incremental.fetch.session.cache.slots = 1000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.095253105Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=772.527µs
23:17:00 policy-apex-pdp | fetch.max.wait.ms = 500
23:17:00 policy-pap | fetch.max.bytes = 52428800
23:17:00 policy-db-migrator |
23:17:00 kafka | message.max.bytes = 1048588
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.100512457Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
23:17:00 policy-apex-pdp | fetch.min.bytes = 1
23:17:00 policy-pap | fetch.max.wait.ms = 500
23:17:00 policy-db-migrator |
23:17:00 kafka | metadata.log.dir = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.101705208Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.192261ms
23:17:00 policy-apex-pdp | group.id = e43a1262-c2bd-4185-8b6c-0623a45ad046
23:17:00 policy-pap | fetch.min.bytes = 1
23:17:00 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
23:17:00 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.10497721Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
23:17:00 policy-apex-pdp | group.instance.id = null
23:17:00 policy-pap | group.id = policy-pap
23:17:00 policy-db-migrator | --------------
23:17:00 kafka | metadata.log.max.snapshot.interval.ms = 3600000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.105290663Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=313.293µs
23:17:00 policy-apex-pdp | heartbeat.interval.ms = 3000
23:17:00 policy-pap | group.instance.id = null
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 kafka | metadata.log.segment.bytes = 1073741824
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.108786286Z level=info msg="Executing migration" id="drop table dashboard_v1"
23:17:00 policy-apex-pdp | interceptor.classes = []
23:17:00 policy-pap | heartbeat.interval.ms = 3000
23:17:00 policy-db-migrator | --------------
23:17:00 kafka | metadata.log.segment.min.bytes = 8388608
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.109631305Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=845.259µs
23:17:00 policy-apex-pdp | internal.leave.group.on.close = true
23:17:00 policy-pap | interceptor.classes = []
23:17:00 policy-db-migrator |
23:17:00 kafka | metadata.log.segment.ms = 604800000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.11427579Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
23:17:00 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
23:17:00 policy-pap | internal.leave.group.on.close = true
23:17:00 policy-db-migrator |
23:17:00 kafka | metadata.max.idle.interval.ms = 500
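Each grafana migrator step above is logged as a pair: an "Executing migration" line carrying the step id, then a "Migration successfully executed" line carrying the measured duration in µs or ms. Grafana's migrator itself is written in Go; the sketch below is only a schematic of that execute-then-time log pattern, with hypothetical names, rendered in Java:

    // Schematic illustration of the execute-then-log-duration pattern seen in
    // the grafana migrator output; class and method names are hypothetical.
    public class MigratorLogSketch {
        interface Migration { String id(); void run() throws Exception; }

        static void execute(Migration m) throws Exception {
            System.out.printf("logger=migrator level=info msg=\"Executing migration\" id=\"%s\"%n", m.id());
            long start = System.nanoTime();
            m.run();
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.printf(
                "logger=migrator level=info msg=\"Migration successfully executed\" id=\"%s\" duration=%dµs%n",
                m.id(), micros);
        }
    }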
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.11434034Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=65.14µs
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.116824335Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
23:17:00 policy-apex-pdp | isolation.level = read_uncommitted
23:17:00 kafka | metadata.max.retention.bytes = 104857600
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.118574941Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.750386ms
23:17:00 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:17:00 kafka | metadata.max.retention.ms = 604800000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.12259607Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:17:00 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:17:00 policy-apex-pdp | max.partition.fetch.bytes = 1048576
23:17:00 kafka | metric.reporters = []
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.124937123Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.340353ms
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | isolation.level = read_uncommitted
23:17:00 policy-apex-pdp | max.poll.interval.ms = 300000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.128881891Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
23:17:00 kafka | metrics.num.samples = 2
23:17:00 policy-db-migrator |
23:17:00 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:17:00 policy-apex-pdp | max.poll.records = 500
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.130671189Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.789008ms
23:17:00 kafka | metrics.recording.level = INFO
23:17:00 policy-db-migrator |
23:17:00 policy-pap | max.partition.fetch.bytes = 1048576
23:17:00 policy-apex-pdp | metadata.max.age.ms = 300000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.135231363Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
23:17:00 kafka | metrics.sample.window.ms = 30000
23:17:00 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
23:17:00 policy-pap | max.poll.interval.ms = 300000
23:17:00 policy-apex-pdp | metric.reporters = []
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.13601406Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=781.837µs
23:17:00 kafka | min.insync.replicas = 1
23:17:00 policy-pap | max.poll.records = 500
23:17:00 policy-apex-pdp | metrics.num.samples = 2
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.139559195Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
23:17:00 kafka | node.id = 1
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | metadata.max.age.ms = 300000
23:17:00 policy-apex-pdp | metrics.recording.level = INFO
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.142401652Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.844557ms
23:17:00 kafka | num.io.threads = 8
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:17:00 policy-pap | metric.reporters = []
23:17:00 policy-apex-pdp | metrics.sample.window.ms = 30000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.145589493Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
23:17:00 kafka | num.network.threads = 3
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | metrics.num.samples = 2
23:17:00 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:17:00 kafka | num.partitions = 1
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.146377251Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=787.218µs
23:17:00 policy-pap | metrics.recording.level = INFO
23:17:00 policy-apex-pdp | receive.buffer.bytes = 65536
23:17:00 kafka | num.recovery.threads.per.data.dir = 1
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.150881414Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
23:17:00 policy-pap | metrics.sample.window.ms = 30000
23:17:00 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:17:00 kafka | num.replica.alter.log.dirs.threads = null
23:17:00 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.151696372Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=810.628µs
23:17:00 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:17:00 policy-apex-pdp | reconnect.backoff.ms = 50
23:17:00 kafka | num.replica.fetchers = 1
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.155355788Z level=info msg="Executing migration" id="Update dashboard table charset"
23:17:00 policy-pap | receive.buffer.bytes = 65536
23:17:00 policy-apex-pdp | request.timeout.ms = 30000
23:17:00 kafka | offset.metadata.max.bytes = 4096
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.155398248Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=44.16µs
23:17:00 policy-pap | reconnect.backoff.max.ms = 1000
23:17:00 policy-apex-pdp | retry.backoff.ms = 100
23:17:00 kafka | offsets.commit.required.acks = -1
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.159374327Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
23:17:00 policy-pap | reconnect.backoff.ms = 50
23:17:00 policy-apex-pdp | sasl.client.callback.handler.class = null
23:17:00 kafka | offsets.commit.timeout.ms = 5000
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.159416247Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=43.61µs
23:17:00 policy-pap | request.timeout.ms = 30000
23:17:00 policy-apex-pdp | sasl.jaas.config = null
23:17:00 kafka | offsets.load.buffer.size = 5242880
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.164308364Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
23:17:00 policy-pap | retry.backoff.ms = 100
23:17:00 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:17:00 kafka | offsets.retention.check.interval.ms = 600000
23:17:00 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.167900829Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.592465ms
23:17:00 policy-pap | sasl.client.callback.handler.class = null
23:17:00 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
23:17:00 kafka | offsets.retention.minutes = 10080
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.173024679Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
23:17:00 policy-pap | sasl.jaas.config = null
23:17:00 policy-apex-pdp | sasl.kerberos.service.name = null
23:17:00 kafka | offsets.topic.compression.codec = 0
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.174856517Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.831838ms
23:17:00 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:17:00 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
23:17:00 kafka | offsets.topic.num.partitions = 50
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.178142019Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
23:17:00 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:17:00 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
23:17:00 kafka | offsets.topic.replication.factor = 1
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.180067687Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.898758ms
23:17:00 policy-pap | sasl.kerberos.service.name = null
23:17:00 policy-apex-pdp | sasl.login.callback.handler.class = null
23:17:00 kafka | offsets.topic.segment.bytes = 104857600
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.185134156Z level=info msg="Executing migration" id="Add column uid in dashboard"
23:17:00 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:17:00 policy-apex-pdp | sasl.login.class = null
23:17:00 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
23:17:00 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.187141565Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.006479ms
23:17:00 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:17:00 policy-apex-pdp | sasl.login.connect.timeout.ms = null
23:17:00 kafka | password.encoder.iterations = 4096
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.191307586Z level=info msg="Executing migration" id="Update uid column values in dashboard"
23:17:00 policy-pap | sasl.login.callback.handler.class = null
23:17:00 policy-apex-pdp | sasl.login.read.timeout.ms = null
23:17:00 kafka | password.encoder.key.length = 128
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.191492738Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=182.492µs
23:17:00 policy-pap | sasl.login.class = null
23:17:00 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
23:17:00 kafka | password.encoder.keyfactory.algorithm = null
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.195297385Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
23:17:00 policy-pap | sasl.login.connect.timeout.ms = null
23:17:00 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
23:17:00 kafka | password.encoder.old.secret = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.196156343Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=856.728µs
23:17:00 policy-pap | sasl.login.read.timeout.ms = null
23:17:00 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
23:17:00 kafka | password.encoder.secret = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.201586165Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
23:17:00 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:17:00 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
23:17:00 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
23:17:00 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.202678066Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.091391ms
23:17:00 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:17:00 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
23:17:00 kafka | process.roles = []
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.205952388Z level=info msg="Executing migration" id="Update dashboard title length"
23:17:00 policy-pap | sasl.login.refresh.window.factor = 0.8
23:17:00 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
23:17:00 kafka | producer.id.expiration.check.interval.ms = 600000
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.205979398Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=27.16µs
23:17:00 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:17:00 policy-apex-pdp | sasl.mechanism = GSSAPI
23:17:00 kafka | producer.id.expiration.ms = 86400000
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.208433342Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
23:17:00 policy-pap | sasl.login.retry.backoff.max.ms = 10000
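The interleaved "ConsumerConfig values:" dumps are the effective settings the Kafka client echoes whenever a consumer is constructed: bootstrap.servers = [kafka:9092], group.id = policy-pap (or the generated e43a1262-... group for apex-pdp), auto.offset.reset = latest, and StringDeserializer for both key and value. A minimal sketch of building the equivalent configuration with the kafka-clients API, setting only values visible in this log and leaving everything else at its default:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PapConsumerConfigSketch {
        static Properties consumerProps() {
            Properties props = new Properties();
            // Values as echoed by the policy-pap consumer dump above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            return props;
        }
    }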
23:17:00 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
23:17:00 kafka | producer.purgatory.purge.interval.requests = 1000
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.209193949Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=759.777µs
23:17:00 policy-pap | sasl.login.retry.backoff.ms = 100
23:17:00 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:17:00 kafka | queued.max.request.bytes = -1
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.214512671Z level=info msg="Executing migration" id="create dashboard_provisioning"
23:17:00 policy-pap | sasl.mechanism = GSSAPI
23:17:00 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:17:00 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.215187217Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=674.106µs
23:17:00 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:17:00 kafka | queued.max.requests = 500
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.218239857Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
23:17:00 policy-pap | sasl.oauthbearer.expected.audience = null
23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:17:00 kafka | quota.window.num = 11
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.228459726Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=10.217719ms
23:17:00 policy-pap | sasl.oauthbearer.expected.issuer = null
23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:17:00 kafka | quota.window.size.seconds = 1
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.231919079Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
23:17:00 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.232412534Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=495.055µs
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:17:00 kafka | remote.log.manager.task.interval.ms = 30000
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.235706746Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:17:00 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:17:00 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
23:17:00 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.236415233Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=707.857µs
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:17:00 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
23:17:00 kafka | remote.log.manager.task.retry.backoff.ms = 500
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.240849046Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:17:00 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
23:17:00 kafka | remote.log.manager.task.retry.jitter = 0.2
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.241634404Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=784.638µs
23:17:00 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:17:00 policy-apex-pdp | security.protocol = PLAINTEXT
23:17:00 kafka | remote.log.manager.thread.pool.size = 10
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.244949346Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
23:17:00 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:17:00 policy-apex-pdp | security.providers = null
23:17:00 kafka | remote.log.metadata.manager.class.name = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.2454007Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=451.184µs
23:17:00 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:17:00 policy-apex-pdp | send.buffer.bytes = 131072
23:17:00 kafka | remote.log.metadata.manager.class.path = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.249845313Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
23:17:00 policy-pap | security.protocol = PLAINTEXT
23:17:00 policy-apex-pdp | session.timeout.ms = 45000
23:17:00 kafka | remote.log.metadata.manager.impl.prefix = null
23:17:00 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.250679371Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=831.078µs
23:17:00 policy-pap | security.providers = null
23:17:00 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:17:00 kafka | remote.log.metadata.manager.listener.name = null
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.255643479Z level=info msg="Executing migration" id="Add check_sum column"
23:17:00 policy-pap | send.buffer.bytes = 131072
23:17:00 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:17:00 kafka | remote.log.reader.max.pending.tasks = 100
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.257628968Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.985489ms
23:17:00 policy-pap | session.timeout.ms = 45000
23:17:00 policy-apex-pdp | ssl.cipher.suites = null
23:17:00 kafka | remote.log.reader.threads = 10
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.260738758Z level=info msg="Executing migration" id="Add index for dashboard_title"
23:17:00 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:17:00 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:17:00 kafka | remote.log.storage.manager.class.name = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.261513326Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=773.488µs
23:17:00 policy-pap | socket.connection.setup.timeout.ms = 10000
23:17:00 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:17:00 kafka | remote.log.storage.manager.class.path = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.264536465Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
23:17:00 policy-pap | ssl.cipher.suites = null
23:17:00 policy-apex-pdp | ssl.engine.factory.class = null
23:17:00 kafka | remote.log.storage.manager.impl.prefix = null
23:17:00 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.264711117Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=174.242µs
23:17:00 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:17:00 policy-apex-pdp | ssl.key.password = null
23:17:00 kafka | remote.log.storage.system.enable = false
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.269281421Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
23:17:00 policy-pap | ssl.endpoint.identification.algorithm = https
23:17:00 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:17:00 kafka | replica.fetch.backoff.ms = 1000
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.269552614Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=270.503µs
23:17:00 policy-pap | ssl.engine.factory.class = null
23:17:00 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:17:00 kafka | replica.fetch.max.bytes = 1048576
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.272863996Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
23:17:00 policy-pap | ssl.key.password = null
23:17:00 policy-apex-pdp | ssl.keystore.key = null
23:17:00 kafka | replica.fetch.min.bytes = 1
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.274062088Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.197292ms
23:17:00 policy-pap | ssl.keymanager.algorithm = SunX509
23:17:00 policy-apex-pdp | ssl.keystore.location = null
23:17:00 kafka | replica.fetch.response.max.bytes = 10485760
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.277649832Z level=info msg="Executing migration" id="Add isPublic for dashboard"
23:17:00 policy-pap | ssl.keystore.certificate.chain = null
23:17:00 policy-apex-pdp | ssl.keystore.password = null
23:17:00 kafka | replica.fetch.wait.max.ms = 500
23:17:00 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.279915174Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.265242ms
23:17:00 policy-pap | ssl.keystore.key = null
23:17:00 policy-apex-pdp | ssl.keystore.type = JKS
23:17:00 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.284489499Z level=info msg="Executing migration" id="create data_source table"
23:17:00 policy-pap | ssl.keystore.location = null
23:17:00 policy-apex-pdp | ssl.protocol = TLSv1.3
23:17:00 kafka | replica.lag.time.max.ms = 30000
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.285337307Z level=info msg="Migration successfully executed" id="create data_source table" duration=847.098µs
23:17:00 policy-pap | ssl.keystore.password = null
23:17:00 policy-apex-pdp | ssl.provider = null
23:17:00 kafka | replica.selector.class = null
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.289180274Z level=info msg="Executing migration" id="add index data_source.account_id"
23:17:00 policy-pap | ssl.keystore.type = JKS
23:17:00 policy-apex-pdp | ssl.secure.random.implementation = null
23:17:00 kafka | replica.socket.receive.buffer.bytes = 65536
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.289954742Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=770.467µs
23:17:00 policy-pap | ssl.protocol = TLSv1.3
23:17:00 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:17:00 kafka | replica.socket.timeout.ms = 30000
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.293295354Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
23:17:00 policy-pap | ssl.provider = null
23:17:00 policy-apex-pdp | ssl.truststore.certificates = null
23:17:00 kafka | replication.quota.window.num = 11
23:17:00 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.294093772Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=797.777µs
23:17:00 policy-pap | ssl.secure.random.implementation = null
23:17:00 policy-apex-pdp | ssl.truststore.location = null
23:17:00 kafka | replication.quota.window.size.seconds = 1
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.299526024Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
23:17:00 policy-pap | ssl.trustmanager.algorithm = PKIX
23:17:00 policy-apex-pdp | ssl.truststore.password = null
23:17:00 kafka | request.timeout.ms = 30000
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
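One pattern worth noting in the DDL above: every jpatosca* table is keyed either by a (name, version) concept pair or by the four-column (parentLocalName, localName, parentKeyVersion, parentKeyName) reference. The records below are illustrative stand-ins for those two key shapes only; they are not the actual ONAP policy-models classes:

    // Illustrative only: the real ONAP models live in the policy/models repo.
    // These records merely mirror the two key shapes used by the jpatosca* tables.
    public class ToscaKeySketch {
        record ConceptKey(String name, String version) {}
        record ReferenceKey(String parentKeyName, String parentKeyVersion,
                            String parentLocalName, String localName) {}

        public static void main(String[] args) {
            // Hypothetical example values for demonstration.
            ConceptKey dataType = new ConceptKey("onap.datatypes.Example", "1.0.0");
            ReferenceKey property = new ReferenceKey(dataType.name(), dataType.version(),
                    "NULL", "exampleProperty");
            System.out.println(dataType + " / " + property);
        }
    }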
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.300273272Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=746.658µs
23:17:00 policy-pap | ssl.truststore.certificates = null
23:17:00 policy-apex-pdp | ssl.truststore.type = JKS
23:17:00 kafka | reserved.broker.max.id = 1000
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.303462622Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
23:17:00 policy-pap | ssl.truststore.location = null
23:17:00 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:17:00 kafka | sasl.client.callback.handler.class = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.304329221Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=864.819µs
23:17:00 policy-pap | ssl.truststore.password = null
23:17:00 policy-apex-pdp |
23:17:00 kafka | sasl.enabled.mechanisms = [GSSAPI]
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.307521632Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
23:17:00 policy-pap | ssl.truststore.type = JKS
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.187+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:17:00 kafka | sasl.jaas.config = null
23:17:00 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.317112475Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.590643ms
23:17:00 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.187+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:17:00 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.321967622Z level=info msg="Executing migration" id="create data_source table v2"
23:17:00 policy-pap |
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.187+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878905187
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.322743039Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=774.647µs
23:17:00 policy-pap | [2024-01-21T23:15:01.898+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:17:00 kafka | sasl.kerberos.min.time.before.relogin = 60000
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.188+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Subscribed to topic(s): policy-pdp-pap
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.32595031Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
23:17:00 policy-pap | [2024-01-21T23:15:01.898+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:17:00 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.189+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0644ab6b-245c-4d68-8e2b-62e7f136f852, alive=false, publisher=null]]: starting
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.326742258Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=791.278µs
23:17:00 policy-pap | [2024-01-21T23:15:01.898+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878901898
23:17:00 kafka | sasl.kerberos.service.name = null
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.202+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.33002599Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
23:17:00 policy-pap | [2024-01-21T23:15:01.898+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
23:17:00 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
23:17:00 policy-apex-pdp | acks = -1
23:17:00 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.330828588Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=802.598µs
23:17:00 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
23:17:00 policy-apex-pdp | auto.include.jmx.reporter = true
23:17:00 policy-pap | [2024-01-21T23:15:02.249+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.335795186Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
23:17:00 policy-apex-pdp | batch.size = 16384
23:17:00 policy-pap | [2024-01-21T23:15:02.472+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.336563233Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=763.337µs
23:17:00 kafka | sasl.login.callback.handler.class = null
23:17:00 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:17:00 policy-pap | [2024-01-21T23:15:02.767+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6fafbdac, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@c7c07ff, org.springframework.security.web.context.SecurityContextHolderFilter@5dc120ab, org.springframework.security.web.header.HeaderWriterFilter@750c23a3, org.springframework.security.web.authentication.logout.LogoutFilter@581d5b33, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3909308c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7ef7f414, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4c3d72fd, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@d271d6c, org.springframework.security.web.access.ExceptionTranslationFilter@5bf1b528, org.springframework.security.web.access.intercept.AuthorizationFilter@90394d]
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.340581412Z level=info msg="Executing migration" id="Add column with_credentials"
23:17:00 kafka | sasl.login.class = null
23:17:00 policy-apex-pdp | buffer.memory = 33554432
23:17:00 policy-pap | [2024-01-21T23:15:03.668+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.34450973Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.928318ms
23:17:00 kafka | sasl.login.connect.timeout.ms = null
23:17:00 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:17:00 policy-pap | [2024-01-21T23:15:03.737+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.347889493Z level=info msg="Executing migration" id="Add secure json data column"
23:17:00 kafka | sasl.login.read.timeout.ms = null
23:17:00 policy-apex-pdp | client.id = producer-1
23:17:00 policy-pap | [2024-01-21T23:15:03.779+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
23:17:00 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.350161155Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.271212ms
23:17:00 kafka | sasl.login.refresh.buffer.seconds = 300
23:17:00 policy-apex-pdp | compression.type = none
23:17:00 policy-pap | [2024-01-21T23:15:03.800+00:00|INFO|ServiceManager|main] Policy PAP starting
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.355054302Z level=info msg="Executing migration" id="Update data_source table charset"
23:17:00 kafka | sasl.login.refresh.min.period.seconds = 60
23:17:00 policy-apex-pdp | connections.max.idle.ms = 540000
23:17:00 policy-pap | [2024-01-21T23:15:03.800+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.355146133Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=93.141µs
23:17:00 kafka | sasl.login.refresh.window.factor = 0.8
23:17:00 policy-apex-pdp | delivery.timeout.ms = 120000
23:17:00 policy-pap | [2024-01-21T23:15:03.801+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.358487675Z level=info msg="Executing migration" id="Update initial version to 1"
23:17:00 kafka | sasl.login.refresh.window.jitter = 0.05
23:17:00 policy-apex-pdp | enable.idempotence = true
23:17:00 policy-pap | [2024-01-21T23:15:03.802+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.358784488Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=297.033µs
23:17:00 policy-apex-pdp | interceptor.classes = []
23:17:00 policy-pap | [2024-01-21T23:15:03.802+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
23:17:00 policy-db-migrator |
23:17:00 kafka | sasl.login.retry.backoff.max.ms = 10000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.362690256Z level=info msg="Executing migration" id="Add read_only data column"
23:17:00 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:17:00 policy-pap | [2024-01-21T23:15:03.802+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
23:17:00 policy-db-migrator |
23:17:00 kafka | sasl.login.retry.backoff.ms = 100
23:17:00 policy-apex-pdp | linger.ms = 0
23:17:00 policy-pap | [2024-01-21T23:15:03.802+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
23:17:00 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
23:17:00 kafka | sasl.mechanism.controller.protocol = GSSAPI
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.365642595Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.953149ms
23:17:00 policy-apex-pdp | max.block.ms = 60000
23:17:00 policy-db-migrator | --------------
23:17:00 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.37034191Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
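Both policy-pap and policy-apex-pdp log "Subscribed to topic(s): policy-pdp-pap", and the BusTopicSource dumps show a KafkaConsumerWrapper with fetchTimeout=15000. A minimal, self-contained sketch of the plain kafka-clients subscribe-and-poll loop underneath such a wrapper (group id taken from the log; error handling omitted):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapPollSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                while (true) {
                    // 15000 ms mirrors the fetchTimeout echoed in the dumps above.
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(15000))) {
                        System.out.println(rec.value());
                    }
                }
            }
        }
    }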
23:17:00 policy-pap | [2024-01-21T23:15:03.809+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0096ba3d-86d0-4a50-8361-ec89b03a0194, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5c65fa69
23:17:00 policy-apex-pdp | max.in.flight.requests.per.connection = 5
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
23:17:00 kafka | sasl.oauthbearer.clock.skew.seconds = 30
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.370516102Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=173.452µs
23:17:00 policy-pap | [2024-01-21T23:15:03.822+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0096ba3d-86d0-4a50-8361-ec89b03a0194, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:17:00 policy-apex-pdp | max.request.size = 1048576
23:17:00 policy-db-migrator | --------------
23:17:00 kafka | sasl.oauthbearer.expected.audience = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.373725893Z level=info msg="Executing migration" id="Update json_data with nulls"
23:17:00 policy-pap | [2024-01-21T23:15:03.823+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:17:00 policy-apex-pdp | metadata.max.age.ms = 300000
23:17:00 policy-db-migrator |
23:17:00 kafka | sasl.oauthbearer.expected.issuer = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.373882415Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=156.792µs
23:17:00 policy-pap | allow.auto.create.topics = true
23:17:00 policy-apex-pdp | metadata.max.idle.ms = 300000
23:17:00 policy-db-migrator |
23:17:00 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.376262128Z level=info msg="Executing migration" id="Add uid column"
23:17:00 policy-pap | auto.commit.interval.ms = 5000
23:17:00 policy-apex-pdp | metric.reporters = []
23:17:00 policy-db-migrator | > upgrade 0450-pdpgroup.sql
23:17:00 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.37852071Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.257882ms
23:17:00 policy-pap | auto.include.jmx.reporter = true
23:17:00 policy-apex-pdp | metrics.num.samples = 2
23:17:00 policy-db-migrator | --------------
23:17:00 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.381993893Z level=info msg="Executing migration" id="Update uid value"
23:17:00 policy-pap | auto.offset.reset = latest
23:17:00 policy-apex-pdp | metrics.recording.level = INFO
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
23:17:00 kafka | sasl.oauthbearer.jwks.endpoint.url = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.382176315Z level=info msg="Migration successfully executed" id="Update uid value" duration=183.782µs
23:17:00 policy-pap | bootstrap.servers = [kafka:9092]
23:17:00 policy-apex-pdp | metrics.sample.window.ms = 30000
23:17:00 policy-db-migrator | --------------
23:17:00 kafka | sasl.oauthbearer.scope.claim.name = scope
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.38685852Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
23:17:00 policy-pap | check.crcs = true
23:17:00 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
23:17:00 policy-db-migrator |
23:17:00 kafka | sasl.oauthbearer.sub.claim.name = sub
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.389473645Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=2.612445ms
23:17:00 policy-pap | client.dns.lookup = use_all_dns_ips
23:17:00 policy-apex-pdp | partitioner.availability.timeout.ms = 0
23:17:00 policy-db-migrator |
23:17:00 kafka | sasl.oauthbearer.token.endpoint.url = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.394725917Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
23:17:00 policy-pap | client.id = consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3
23:17:00 policy-apex-pdp | partitioner.class = null
23:17:00 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
23:17:00 kafka | sasl.server.callback.handler.class = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.396030399Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.303653ms
23:17:00 policy-pap | client.rack =
23:17:00 policy-apex-pdp | partitioner.ignore.keys = false
23:17:00 policy-db-migrator | --------------
23:17:00 kafka | sasl.server.max.receive.size = 524288
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.399545683Z level=info msg="Executing migration" id="create api_key table"
23:17:00 policy-pap | connections.max.idle.ms = 540000
23:17:00 policy-apex-pdp | receive.buffer.bytes = 32768
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:17:00 kafka | security.inter.broker.protocol = PLAINTEXT
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.40030916Z level=info msg="Migration successfully executed" id="create api_key table" duration=765.157µs
23:17:00 policy-pap | default.api.timeout.ms = 60000
23:17:00 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:17:00 policy-db-migrator | --------------
23:17:00 kafka | security.providers = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.405995366Z level=info msg="Executing migration" id="add index api_key.account_id"
23:17:00 policy-pap | enable.auto.commit = true
23:17:00 policy-apex-pdp | reconnect.backoff.ms = 50
23:17:00 policy-db-migrator |
9223372036854775807 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.407206728Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.210751ms 23:17:00 policy-pap | exclude.internal.topics = true 23:17:00 policy-apex-pdp | request.timeout.ms = 30000 23:17:00 policy-db-migrator | 23:17:00 kafka | socket.connection.setup.timeout.max.ms = 30000 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.410784912Z level=info msg="Executing migration" id="add index api_key.key" 23:17:00 policy-pap | fetch.max.bytes = 52428800 23:17:00 policy-apex-pdp | retries = 2147483647 23:17:00 policy-db-migrator | > upgrade 0470-pdp.sql 23:17:00 kafka | socket.connection.setup.timeout.ms = 10000 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.412150685Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.366173ms 23:17:00 policy-pap | fetch.max.wait.ms = 500 23:17:00 policy-apex-pdp | retry.backoff.ms = 100 23:17:00 policy-db-migrator | -------------- 23:17:00 kafka | socket.listen.backlog.size = 50 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.415892731Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:17:00 policy-pap | fetch.min.bytes = 1 23:17:00 policy-apex-pdp | sasl.client.callback.handler.class = null 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:00 kafka | socket.receive.buffer.bytes = 102400 23:17:00 policy-pap | group.id = 0096ba3d-86d0-4a50-8361-ec89b03a0194 23:17:00 policy-db-migrator | -------------- 23:17:00 kafka | socket.request.max.bytes = 104857600 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.41672954Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=836.269µs 23:17:00 policy-apex-pdp | sasl.jaas.config = null 23:17:00 policy-pap | group.instance.id = null 23:17:00 policy-db-migrator | 23:17:00 kafka | socket.send.buffer.bytes = 102400 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.423570756Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:17:00 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:00 policy-pap | heartbeat.interval.ms = 3000 23:17:00 kafka | ssl.cipher.suites = [] 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.424369304Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=798.528µs 23:17:00 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:17:00 policy-db-migrator | 23:17:00 kafka | ssl.client.auth = none 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.427759646Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:17:00 policy-apex-pdp | sasl.kerberos.service.name = null 23:17:00 policy-pap | interceptor.classes = [] 23:17:00 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:17:00 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.428843537Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.082291ms 23:17:00 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:00 
policy-pap | internal.leave.group.on.close = true 23:17:00 policy-db-migrator | -------------- 23:17:00 kafka | ssl.endpoint.identification.algorithm = https 23:17:00 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.434502502Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:17:00 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:17:00 policy-apex-pdp | sasl.login.callback.handler.class = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.435705604Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.202182ms 23:17:00 policy-pap | isolation.level = read_uncommitted 23:17:00 kafka | ssl.engine.factory.class = null 23:17:00 policy-apex-pdp | sasl.login.class = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.439680922Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:17:00 kafka | ssl.key.password = null 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:00 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.448551088Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.868756ms 23:17:00 kafka | ssl.keymanager.algorithm = SunX509 23:17:00 policy-db-migrator | 23:17:00 policy-pap | max.partition.fetch.bytes = 1048576 23:17:00 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.452473876Z level=info msg="Executing migration" id="create api_key table v2" 23:17:00 kafka | ssl.keystore.certificate.chain = null 23:17:00 policy-db-migrator | 23:17:00 policy-pap | max.poll.interval.ms = 300000 23:17:00 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.453140062Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=665.956µs 23:17:00 kafka | ssl.keystore.key = null 23:17:00 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:17:00 policy-pap | max.poll.records = 500 23:17:00 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.457493674Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:17:00 kafka | ssl.keystore.location = null 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-pap | metadata.max.age.ms = 300000 23:17:00 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.458258752Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=764.258µs 23:17:00 policy-pap | metric.reporters = [] 23:17:00 policy-apex-pdp | 
sasl.login.refresh.window.jitter = 0.05 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:00 kafka | ssl.keystore.password = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.461644925Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:17:00 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:17:00 policy-db-migrator | -------------- 23:17:00 kafka | ssl.keystore.type = JKS 23:17:00 policy-pap | metrics.num.samples = 2 23:17:00 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:17:00 policy-db-migrator | 23:17:00 kafka | ssl.principal.mapping.rules = DEFAULT 23:17:00 policy-pap | metrics.recording.level = INFO 23:17:00 policy-db-migrator | 23:17:00 kafka | ssl.protocol = TLSv1.3 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.462818506Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.173131ms 23:17:00 policy-apex-pdp | sasl.mechanism = GSSAPI 23:17:00 kafka | ssl.provider = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.466392301Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:17:00 policy-pap | metrics.sample.window.ms = 30000 23:17:00 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:17:00 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:17:00 kafka | ssl.secure.random.implementation = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.467696413Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.302212ms 23:17:00 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:17:00 kafka | ssl.trustmanager.algorithm = PKIX 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.472036855Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:17:00 policy-pap | receive.buffer.bytes = 65536 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:00 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:17:00 kafka | ssl.truststore.certificates = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.472400449Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=363.334µs 23:17:00 policy-pap | reconnect.backoff.max.ms = 1000 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:00 kafka | ssl.truststore.location = null 23:17:00 grafana | logger=migrator 
t=2024-01-21T23:14:31.475896913Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:17:00 policy-pap | reconnect.backoff.ms = 50 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:00 kafka | ssl.truststore.password = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.476430958Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=535.575µs 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:00 kafka | ssl.truststore.type = JKS 23:17:00 policy-pap | request.timeout.ms = 30000 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.480679079Z level=info msg="Executing migration" id="Update api_key table charset" 23:17:00 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:17:00 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:17:00 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:17:00 policy-pap | retry.backoff.ms = 100 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.48071709Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=38.751µs 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:17:00 kafka | transaction.max.timeout.ms = 900000 23:17:00 policy-pap | sasl.client.callback.handler.class = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.485626227Z level=info msg="Executing migration" id="Add expires to api_key table" 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:17:00 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:17:00 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:17:00 policy-pap | sasl.jaas.config = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.489609446Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.982329ms 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:17:00 kafka | transaction.state.log.load.buffer.size = 5242880 23:17:00 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.493449003Z level=info msg="Executing migration" id="Add service account foreign key" 23:17:00 policy-db-migrator | 23:17:00 kafka | transaction.state.log.min.isr = 2 23:17:00 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:00 policy-apex-pdp | security.protocol = PLAINTEXT 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.496937407Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.488234ms 23:17:00 policy-db-migrator | 23:17:00 kafka | transaction.state.log.num.partitions = 50 23:17:00 policy-pap | sasl.kerberos.service.name = null 23:17:00 policy-apex-pdp | security.providers = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.50038051Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:17:00 policy-db-migrator | > upgrade 
0520-toscacapabilityassignments.sql 23:17:00 kafka | transaction.state.log.replication.factor = 3 23:17:00 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:00 policy-apex-pdp | send.buffer.bytes = 131072 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.500546162Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=165.552µs 23:17:00 policy-db-migrator | -------------- 23:17:00 kafka | transaction.state.log.segment.bytes = 104857600 23:17:00 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:00 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.503766133Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:17:00 kafka | transactional.id.expiration.ms = 604800000 23:17:00 policy-pap | sasl.login.callback.handler.class = null 23:17:00 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.506148546Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.382103ms 23:17:00 policy-db-migrator | -------------- 23:17:00 kafka | unclean.leader.election.enable = false 23:17:00 policy-pap | sasl.login.class = null 23:17:00 policy-apex-pdp | ssl.cipher.suites = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.510844441Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:17:00 policy-db-migrator | 23:17:00 kafka | unstable.api.versions.enable = false 23:17:00 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.513434657Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.584125ms 23:17:00 policy-db-migrator | 23:17:00 kafka | zookeeper.clientCnxnSocket = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.517580857Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:17:00 policy-pap | sasl.login.connect.timeout.ms = null 23:17:00 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:17:00 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:17:00 kafka | zookeeper.connect = zookeeper:2181 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.518272363Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=690.726µs 23:17:00 policy-pap | sasl.login.read.timeout.ms = null 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | ssl.engine.factory.class = null 23:17:00 kafka | zookeeper.connection.timeout.ms = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.52205278Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:17:00 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY 
PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:00 policy-apex-pdp | ssl.key.password = null 23:17:00 kafka | zookeeper.max.in.flight.requests = 10 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.522581605Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=528.525µs 23:17:00 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:17:00 kafka | zookeeper.metadata.migration.enable = false 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.527363411Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:17:00 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:17:00 kafka | zookeeper.session.timeout.ms = 18000 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.528538993Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.179552ms 23:17:00 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | ssl.keystore.key = null 23:17:00 kafka | zookeeper.set.acl = false 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.53243036Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:17:00 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:00 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:17:00 policy-apex-pdp | ssl.keystore.location = null 23:17:00 kafka | zookeeper.ssl.cipher.suites = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.533644752Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.213962ms 23:17:00 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | ssl.keystore.password = null 23:17:00 kafka | zookeeper.ssl.client.enable = false 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.538415119Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:17:00 policy-pap | sasl.mechanism = GSSAPI 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:17:00 policy-apex-pdp | ssl.keystore.type = JKS 23:17:00 kafka | zookeeper.ssl.crl.enable = false 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.539944403Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.529114ms 23:17:00 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | ssl.protocol = TLSv1.3 23:17:00 kafka | zookeeper.ssl.enabled.protocols = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.543636489Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:17:00 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | ssl.provider = null 23:17:00 kafka | 
zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.545242875Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.609106ms 23:17:00 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | ssl.secure.random.implementation = null 23:17:00 kafka | zookeeper.ssl.keystore.location = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.549010051Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:00 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:17:00 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:17:00 kafka | zookeeper.ssl.keystore.password = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.549130732Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=112.321µs 23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | ssl.truststore.certificates = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.556331532Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:17:00 kafka | zookeeper.ssl.keystore.type = null 23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:17:00 policy-apex-pdp | ssl.truststore.location = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.556371253Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=40.541µs 23:17:00 kafka | zookeeper.ssl.ocsp.enable = false 23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | ssl.truststore.password = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.560930557Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:17:00 kafka | zookeeper.ssl.protocol = TLSv1.2 23:17:00 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:00 policy-apex-pdp | ssl.truststore.type = JKS 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.563813734Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.882367ms 23:17:00 kafka | zookeeper.ssl.truststore.location = null 23:17:00 policy-db-migrator | 23:17:00 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:00 policy-apex-pdp | transaction.timeout.ms = 60000 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.567007655Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:17:00 kafka | zookeeper.ssl.truststore.password = null 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | transactional.id = null 23:17:00 kafka | zookeeper.ssl.truststore.type = null 23:17:00 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:17:00 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:00 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:00 policy-db-migrator | 
-------------- 23:17:00 policy-pap | security.protocol = PLAINTEXT 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.569768022Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.754607ms 23:17:00 kafka | (kafka.server.KafkaConfig) 23:17:00 policy-apex-pdp | 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:00 policy-pap | security.providers = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.575119294Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:17:00 kafka | [2024-01-21 23:14:32,158] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.225+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-pap | send.buffer.bytes = 131072 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.575249685Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=83.991µs 23:17:00 kafka | [2024-01-21 23:14:32,160] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.245+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 23:17:00 policy-pap | session.timeout.ms = 45000 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.578390416Z level=info msg="Executing migration" id="create quota table v1" 23:17:00 kafka | [2024-01-21 23:14:32,163] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.245+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 23:17:00 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.579139393Z level=info msg="Migration successfully executed" id="create quota table v1" duration=749.207µs 23:17:00 kafka | [2024-01-21 23:14:32,168] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.245+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878905245 23:17:00 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:00 kafka | [2024-01-21 23:14:32,195] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.583249183Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:17:00 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.246+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0644ab6b-245c-4d68-8e2b-62e7f136f852, alive=false, publisher=KafkaPublisherWrapper []]]: 
KAFKA SINK created 23:17:00 kafka | [2024-01-21 23:14:32,200] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:17:00 policy-pap | ssl.cipher.suites = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.584660016Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.405063ms 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.246+00:00|INFO|ServiceManager|main] service manager starting set alive 23:17:00 kafka | [2024-01-21 23:14:32,210] INFO Loaded 0 logs in 15ms (kafka.log.LogManager) 23:17:00 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.588438923Z level=info msg="Executing migration" id="Update quota table charset" 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.246+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:17:00 kafka | [2024-01-21 23:14:32,213] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:17:00 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.588479484Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=42.811µs 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.251+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:17:00 kafka | [2024-01-21 23:14:32,214] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) 23:17:00 policy-pap | ssl.engine.factory.class = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.592903996Z level=info msg="Executing migration" id="create plugin_setting table" 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.252+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:17:00 kafka | [2024-01-21 23:14:32,228] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:17:00 policy-pap | ssl.key.password = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.594081898Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.177662ms 23:17:00 policy-db-migrator | 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.260+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:17:00 kafka | [2024-01-21 23:14:32,281] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:17:00 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.597755393Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:17:00 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.260+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:17:00 kafka | [2024-01-21 23:14:32,304] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:17:00 policy-pap | ssl.keystore.certificate.chain = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.598678042Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=921.659µs 23:17:00 policy-db-migrator | -------------- 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.260+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:17:00 kafka | [2024-01-21 23:14:32,316] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:17:00 policy-pap | ssl.keystore.key = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.602233517Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.260+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e43a1262-c2bd-4185-8b6c-0623a45ad046, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3 23:17:00 kafka | [2024-01-21 23:14:32,357] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:17:00 policy-pap | ssl.keystore.location = null 
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.605443778Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.208981ms
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.261+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e43a1262-c2bd-4185-8b6c-0623a45ad046, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
23:17:00 kafka | [2024-01-21 23:14:32,702] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:17:00 policy-pap | ssl.keystore.password = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.609489107Z level=info msg="Executing migration" id="Update plugin_setting table charset"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.261+00:00|INFO|ServiceManager|main] service manager starting Create REST server
23:17:00 kafka | [2024-01-21 23:14:32,732] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
23:17:00 policy-pap | ssl.keystore.type = JKS
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.609516367Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=28.64µs
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.281+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
23:17:00 kafka | [2024-01-21 23:14:32,733] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:17:00 policy-pap | ssl.protocol = TLSv1.3
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.612067342Z level=info msg="Executing migration" id="create session table"
23:17:00 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
23:17:00 policy-apex-pdp | []
23:17:00 kafka | [2024-01-21 23:14:32,740] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
23:17:00 policy-pap | ssl.provider = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.61292192Z level=info msg="Migration successfully executed" id="create session table" duration=846.688µs
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.283+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
23:17:00 kafka | [2024-01-21 23:14:32,745] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
23:17:00 policy-pap | ssl.secure.random.implementation = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.616408244Z level=info msg="Executing migration" id="Drop old table playlist table"
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ab3de409-b2b8-4395-82ea-8036f980806d","timestampMs":1705878905260,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"}
23:17:00 kafka | [2024-01-21 23:14:32,765] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:17:00 policy-pap | ssl.trustmanager.algorithm = PKIX
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.616529665Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=122.091µs
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.511+00:00|INFO|ServiceManager|main] service manager starting Rest Server
23:17:00 kafka | [2024-01-21 23:14:32,767] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:17:00 policy-pap | ssl.truststore.certificates = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.62216335Z level=info msg="Executing migration" id="Drop old table playlist_item table"
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.511+00:00|INFO|ServiceManager|main] service manager starting
23:17:00 kafka | [2024-01-21 23:14:32,769] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:17:00 policy-pap | ssl.truststore.location = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.622291471Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=128.741µs
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.511+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
23:17:00 policy-pap | ssl.truststore.password = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.626014717Z level=info msg="Executing migration" id="create playlist table v2"
23:17:00 policy-db-migrator |
23:17:00 kafka | [2024-01-21 23:14:32,771] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.511+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:17:00 policy-pap | ssl.truststore.type = JKS
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.627062707Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.04739ms
23:17:00 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
23:17:00 kafka | [2024-01-21 23:14:32,784] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.524+00:00|INFO|ServiceManager|main] service manager started
23:17:00 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.631121277Z level=info msg="Executing migration" id="create playlist item table v2"
23:17:00 policy-db-migrator | --------------
23:17:00 kafka | [2024-01-21 23:14:32,819] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.524+00:00|INFO|ServiceManager|main] service manager started
23:17:00 policy-pap |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.63244663Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.324632ms
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
23:17:00 kafka | [2024-01-21 23:14:32,866] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1705878872850,1705878872850,1,0,0,72057610932846593,258,0,27
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.524+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
23:17:00 policy-pap | [2024-01-21T23:15:03.829+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.635909843Z level=info msg="Executing migration" id="Update playlist table charset" 23:17:00 policy-db-migrator | -------------- 23:17:00 kafka | (kafka.zk.KafkaZkClient) 23:17:00 policy-pap | [2024-01-21T23:15:03.829+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.524+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.635946253Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=37.56µs 23:17:00 policy-db-migrator | 23:17:00 kafka | [2024-01-21 23:14:32,867] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:17:00 policy-pap | [2024-01-21T23:15:03.829+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878903829 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.641+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Cluster ID: -jrszSKtSKq5TnXDeh3xeA 23:17:00 policy-db-migrator | 23:17:00 policy-pap | [2024-01-21T23:15:03.829+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Subscribed to topic(s): policy-pdp-pap 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.641+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: -jrszSKtSKq5TnXDeh3xeA 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.640495617Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:17:00 kafka | [2024-01-21 23:14:32,937] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:17:00 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.642+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer 
clientId=producer-1] ProducerId set to 2 with epoch 0 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.640521478Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=31.11µs 23:17:00 kafka | [2024-01-21 23:14:32,944] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:00 policy-pap | [2024-01-21T23:15:03.831+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:17:00 policy-db-migrator | -------------- 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.643703638Z level=info msg="Executing migration" id="Add playlist column created_at" 23:17:00 kafka | [2024-01-21 23:14:32,951] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:00 policy-pap | [2024-01-21T23:15:03.831+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=988c2327-3928-4e75-b348-c4ca60151503, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7bb86ac 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.643+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.648148472Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.444004ms 23:17:00 kafka | [2024-01-21 23:14:32,951] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:00 policy-pap | [2024-01-21T23:15:03.831+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=988c2327-3928-4e75-b348-c4ca60151503, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.649+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] (Re-)joining group 23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.651154591Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:17:00 kafka | [2024-01-21 23:14:32,966] INFO [GroupCoordinator 1]: Starting 
up. (kafka.coordinator.group.GroupCoordinator)
23:17:00 policy-pap | [2024-01-21T23:15:03.831+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.664+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Request joining group due to: need to re-join with the given member-id: consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.654505623Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.351492ms
23:17:00 kafka | [2024-01-21 23:14:32,969] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
23:17:00 policy-pap | allow.auto.create.topics = true
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.664+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.658900145Z level=info msg="Executing migration" id="drop preferences table v2"
23:17:00 kafka | [2024-01-21 23:14:32,977] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
23:17:00 policy-pap | auto.commit.interval.ms = 5000
23:17:00 policy-apex-pdp | [2024-01-21T23:15:05.664+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] (Re-)joining group
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.659002926Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=103.931µs
23:17:00 kafka | [2024-01-21 23:14:32,986] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
23:17:00 policy-pap | auto.include.jmx.reporter = true
23:17:00 policy-apex-pdp | [2024-01-21T23:15:06.172+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
23:17:00 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.662296539Z level=info msg="Executing migration" id="drop preferences table v3"
23:17:00 kafka | [2024-01-21 23:14:32,990] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
23:17:00 policy-pap | auto.offset.reset = latest
23:17:00 policy-apex-pdp | [2024-01-21T23:15:06.172+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.662379329Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=83.341µs
23:17:00 kafka | [2024-01-21 23:14:32,996] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
23:17:00 policy-pap | bootstrap.servers = [kafka:9092]
23:17:00 policy-apex-pdp | [2024-01-21T23:15:08.670+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Successfully joined group with generation Generation{generationId=1, memberId='consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b', protocol='range'}
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.665929334Z level=info msg="Executing migration" id="create preferences table v3"
23:17:00 kafka | [2024-01-21 23:14:33,003] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
23:17:00 policy-pap | check.crcs = true
23:17:00 policy-apex-pdp | [2024-01-21T23:15:08.677+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Finished assignment for group at generation 1: {consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b=Assignment(partitions=[policy-pdp-pap-0])}
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.667084035Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.156081ms
23:17:00 kafka | [2024-01-21 23:14:33,013] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
23:17:00 policy-pap | client.dns.lookup = use_all_dns_ips
23:17:00 policy-apex-pdp | [2024-01-21T23:15:08.687+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Successfully synced group in generation Generation{generationId=1, memberId='consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b', protocol='range'}
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.672528458Z level=info msg="Executing migration" id="Update preferences table charset"
23:17:00 kafka | [2024-01-21 23:14:33,013] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
23:17:00 policy-pap | client.id = consumer-policy-pap-4
23:17:00 policy-apex-pdp | [2024-01-21T23:15:08.687+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.672569698Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=37.121µs
23:17:00 kafka | [2024-01-21 23:14:33,035] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
23:17:00 policy-pap | client.rack =
23:17:00 policy-apex-pdp | [2024-01-21T23:15:08.688+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Adding newly assigned partitions: policy-pdp-pap-0
23:17:00 policy-db-migrator | > upgrade 0630-toscanodetype.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.678500985Z level=info msg="Executing migration" id="Add column team_id in preferences"
23:17:00 kafka | [2024-01-21 23:14:33,035] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
23:17:00 policy-pap | connections.max.idle.ms = 540000
23:17:00 policy-apex-pdp | [2024-01-21T23:15:08.697+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Found no committed offset for partition policy-pdp-pap-0
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.682532295Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.03157ms
23:17:00 kafka | [2024-01-21 23:14:33,052] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
23:17:00 policy-pap | default.api.timeout.ms = 60000
23:17:00 policy-apex-pdp | [2024-01-21T23:15:08.708+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2, groupId=e43a1262-c2bd-4185-8b6c-0623a45ad046] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
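The apex-pdp lines above trace the standard Kafka consumer-group handshake: a first join rejected with MemberIdRequiredException, a re-join carrying the assigned member id, assignment of policy-pdp-pap-0, and an offset reset because no committed offset exists yet. The sketch below is illustrative only, not the actual policy-pap source (which wraps the client in SingleThreadedKafkaTopicSource); it builds a consumer from the same salient values printed in the ConsumerConfig dump and triggers the same join sequence on first poll.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        // All of these values appear in the ConsumerConfig dump in the log.
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "policy-pap");
        props.put("auto.offset.reset", "latest");
        props.put("enable.auto.commit", "true");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // The first poll drives the join/rebalance seen above:
            // MemberIdRequiredException, re-join with member id, then assignment.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
            }
        }
    }
}
```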
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.685905787Z level=info msg="Executing migration" id="Update team_id column values in preferences"
23:17:00 policy-pap | enable.auto.commit = true
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.261+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.686066399Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=163.902µs
23:17:00 kafka | [2024-01-21 23:14:33,057] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
23:17:00 policy-pap | exclude.internal.topics = true
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"317323ab-8653-4275-bd16-05c52ce9a052","timestampMs":1705878925260,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"}
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.691088068Z level=info msg="Executing migration" id="Add column week_start in preferences"
23:17:00 kafka | [2024-01-21 23:14:33,061] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
23:17:00 policy-pap | fetch.max.bytes = 52428800
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.289+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.694201018Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.11712ms
23:17:00 kafka | [2024-01-21 23:14:33,062] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:17:00 policy-pap | fetch.max.wait.ms = 500
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"317323ab-8653-4275-bd16-05c52ce9a052","timestampMs":1705878925260,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"}
23:17:00 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.699764471Z level=info msg="Executing migration" id="Add column preferences.json_data"
23:17:00 kafka | [2024-01-21 23:14:33,088] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
23:17:00 policy-pap | fetch.min.bytes = 1
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.293+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.702110444Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.346083ms
23:17:00 kafka | [2024-01-21 23:14:33,095] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
23:17:00 policy-pap | group.id = policy-pap
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.444+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.70686502Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
23:17:00 kafka | [2024-01-21 23:14:33,095] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
23:17:00 policy-pap | group.instance.id = null
23:17:00 policy-apex-pdp | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.706982522Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=117.281µs
23:17:00 kafka | [2024-01-21 23:14:33,105] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
23:17:00 policy-pap | heartbeat.interval.ms = 3000
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.455+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.710068861Z level=info msg="Executing migration" id="Add preferences index org_id"
23:17:00 kafka | [2024-01-21 23:14:33,119] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
23:17:00 policy-pap | interceptor.classes = []
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"15fcabe4-fb3e-47f6-b4c1-43b4541365cb","timestampMs":1705878925454,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"}
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.710838109Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=769.158µs
23:17:00 kafka | [2024-01-21 23:14:33,121] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
23:17:00 policy-pap | internal.leave.group.on.close = true
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.456+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
23:17:00 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.716489933Z level=info msg="Executing migration" id="Add preferences index user_id"
23:17:00 kafka | [2024-01-21 23:14:33,121] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
23:17:00 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.459+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.71719773Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=707.507µs
23:17:00 kafka | [2024-01-21 23:14:33,121] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c3d46f28-2b6f-4c92-8ce2-04ffb23d1149","timestampMs":1705878925459,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.719986087Z level=info msg="Executing migration" id="create alert table v1"
23:17:00 policy-pap | isolation.level = read_uncommitted
23:17:00 kafka | [2024-01-21 23:14:33,122] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.470+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.720813195Z level=info msg="Migration successfully executed" id="create alert table v1" duration=826.438µs
23:17:00 kafka | [2024-01-21 23:14:33,125] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"15fcabe4-fb3e-47f6-b4c1-43b4541365cb","timestampMs":1705878925454,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"}
23:17:00 policy-db-migrator |
23:17:00 policy-pap | max.partition.fetch.bytes = 1048576
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.725610222Z level=info msg="Executing migration" id="add index alert org_id & id "
23:17:00 kafka | [2024-01-21 23:14:33,126] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.470+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:17:00 policy-db-migrator |
23:17:00 policy-pap | max.poll.interval.ms = 300000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.727410439Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.805477ms
23:17:00 kafka | [2024-01-21 23:14:33,127] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.474+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator | > upgrade 0660-toscaparameter.sql
23:17:00 policy-pap | max.poll.records = 500
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.731215286Z level=info msg="Executing migration" id="add index alert state"
23:17:00 kafka | [2024-01-21 23:14:33,127] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c3d46f28-2b6f-4c92-8ce2-04ffb23d1149","timestampMs":1705878925459,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:17:00 policy-pap | metadata.max.age.ms = 300000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.732122045Z level=info msg="Migration successfully executed" id="add index alert state" duration=906.709µs
23:17:00 kafka | [2024-01-21 23:14:33,131] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.474+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:17:00 policy-pap | metric.reporters = []
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.73681989Z level=info msg="Executing migration" id="add index alert dashboard_id"
23:17:00 kafka | [2024-01-21 23:14:33,132] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
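The PDP_STATUS heartbeats and PDP_UPDATE/PDP_STATUS request-response pairs above are plain JSON on the policy-pdp-pap topic, and the apex-pdp log notes it uses GSON for REST calls. Below is a minimal, hypothetical decoding sketch; the field names come straight from the logged payloads, but the PdpStatus class here is a deliberately reduced stand-in for the real ONAP message classes.

```java
import com.google.gson.Gson;

public class PdpStatusDemo {
    // Hypothetical reduced form of the message; field names match the logged JSON.
    static class PdpStatus {
        String pdpType;
        String state;
        String healthy;
        String description;
        String messageName;
        String requestId;
        long timestampMs;
        String name;
        String pdpGroup;
    }

    public static void main(String[] args) {
        // Payload copied from the heartbeat logged above.
        String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                + "\"description\":\"Pdp Heartbeat\",\"messageName\":\"PDP_STATUS\","
                + "\"requestId\":\"317323ab-8653-4275-bd16-05c52ce9a052\","
                + "\"timestampMs\":1705878925260,"
                + "\"name\":\"apex-6bd48436-2333-4034-833d-9cd0ef0573c6\","
                + "\"pdpGroup\":\"defaultGroup\"}";
        PdpStatus status = new Gson().fromJson(json, PdpStatus.class);
        // Dispatching on messageName is why apex-pdp logs
        // "discarding event of type PDP_STATUS" for its own echoed messages.
        System.out.println(status.messageName + " from " + status.name
                + " state=" + status.state);
    }
}
```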
23:17:00 policy-db-migrator | --------------
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.509+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:17:00 policy-pap | metrics.num.samples = 2
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.738141853Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.321793ms
23:17:00 kafka | [2024-01-21 23:14:33,132] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
23:17:00 policy-db-migrator |
23:17:00 policy-apex-pdp | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:17:00 policy-pap | metrics.recording.level = INFO
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.742133762Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
23:17:00 kafka | [2024-01-21 23:14:33,134] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.512+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator |
23:17:00 policy-pap | metrics.sample.window.ms = 30000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.743125662Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=991.7µs
23:17:00 kafka | [2024-01-21 23:14:33,144] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"52c4938e-a200-46d0-81f3-21a9a4d3de9b","timestampMs":1705878925511,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:17:00 policy-db-migrator | > upgrade 0670-toscapolicies.sql
23:17:00 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.756114947Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
23:17:00 kafka | [2024-01-21 23:14:33,161] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.519+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | receive.buffer.bytes = 65536
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.757082307Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=962.88µs
23:17:00 kafka | [2024-01-21 23:14:33,161] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser)
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"52c4938e-a200-46d0-81f3-21a9a4d3de9b","timestampMs":1705878925511,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
23:17:00 policy-pap | reconnect.backoff.max.ms = 1000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.761659281Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
23:17:00 kafka | [2024-01-21 23:14:33,161] INFO Kafka startTimeMs: 1705878873147 (org.apache.kafka.common.utils.AppInfoParser)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.519+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | reconnect.backoff.ms = 50
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.762927703Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.274202ms
23:17:00 kafka | [2024-01-21 23:14:33,163] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.535+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator |
23:17:00 policy-pap | request.timeout.ms = 30000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.766340636Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
23:17:00 kafka | [2024-01-21 23:14:33,164] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
23:17:00 policy-apex-pdp | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"04a1ac11-bc72-4cab-ab24-e9132afd087a","timestampMs":1705878925515,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:17:00 policy-db-migrator |
23:17:00 policy-pap | retry.backoff.ms = 100
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.783431982Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=17.088946ms
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.536+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
23:17:00 policy-pap | sasl.client.callback.handler.class = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.790838454Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
23:17:00 kafka | [2024-01-21 23:14:33,165] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"04a1ac11-bc72-4cab-ab24-e9132afd087a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2ab7fd29-16ad-4d9b-982a-342f9d03040b","timestampMs":1705878925536,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | sasl.jaas.config = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.791402719Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=563.565µs
23:17:00 kafka | [2024-01-21 23:14:33,177] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.543+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:17:00 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.794491449Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
23:17:00 kafka | [2024-01-21 23:14:33,178] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
23:17:00 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"04a1ac11-bc72-4cab-ab24-e9132afd087a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2ab7fd29-16ad-4d9b-982a-342f9d03040b","timestampMs":1705878925536,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.795456068Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=964.309µs
23:17:00 kafka | [2024-01-21 23:14:33,179] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:25.544+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:17:00 policy-db-migrator |
23:17:00 policy-pap | sasl.kerberos.service.name = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.798570489Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
23:17:00 kafka | [2024-01-21 23:14:33,183] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
23:17:00 policy-apex-pdp | [2024-01-21T23:15:56.153+00:00|INFO|RequestLog|qtp830863979-29] 172.17.0.2 - policyadmin [21/Jan/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10642 "-" "Prometheus/2.49.1"
23:17:00 policy-db-migrator |
23:17:00 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.798889472Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=336.744µs
23:17:00 kafka | [2024-01-21 23:14:33,194] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
23:17:00 policy-apex-pdp | [2024-01-21T23:16:56.081+00:00|INFO|RequestLog|qtp830863979-30] 172.17.0.2 - policyadmin [21/Jan/2024:23:16:56 +0000] "GET /metrics HTTP/1.1" 200 10644 "-" "Prometheus/2.49.1"
23:17:00 policy-db-migrator | > upgrade 0690-toscapolicy.sql
23:17:00 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.804685528Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
23:17:00 kafka | [2024-01-21 23:14:33,201] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | sasl.login.callback.handler.class = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.805599907Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=913.909µs
23:17:00 kafka | [2024-01-21 23:14:33,201] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
23:17:00 policy-pap | sasl.login.class = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.810326452Z level=info msg="Executing migration" id="create alert_notification table v1"
23:17:00 kafka | [2024-01-21 23:14:33,212] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | sasl.login.connect.timeout.ms = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.8121889Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.869708ms
23:17:00 kafka | [2024-01-21 23:14:33,213] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
23:17:00 policy-db-migrator |
23:17:00 policy-pap | sasl.login.read.timeout.ms = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.816593883Z level=info msg="Executing migration" id="Add column is_default"
23:17:00 kafka | [2024-01-21 23:14:33,213] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
23:17:00 policy-db-migrator |
23:17:00 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.820268359Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.674116ms
23:17:00 kafka | [2024-01-21 23:14:33,214] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
23:17:00 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
23:17:00 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.825908433Z level=info msg="Executing migration" id="Add column frequency"
23:17:00 kafka | [2024-01-21 23:14:33,215] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | sasl.login.refresh.window.factor = 0.8
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.828956993Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.04861ms
23:17:00 kafka | [2024-01-21 23:14:33,237] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
23:17:00 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.833303915Z level=info msg="Executing migration" id="Add column send_reminder"
23:17:00 kafka | [2024-01-21 23:14:33,289] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.837615567Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.311752ms
23:17:00 kafka | [2024-01-21 23:14:33,292] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:17:00 policy-db-migrator |
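Each policy-db-migrator step above follows the same pattern: a numbered upgrade script (0620, 0630, ..., 0770) whose CREATE TABLE IF NOT EXISTS statement is safe to re-run. A rough JDBC sketch of applying one such step is shown below; the DDL string is copied from the 0690-toscapolicy.sql output in the log, while the connection URL and credentials are placeholders (the actual migrator may apply scripts differently).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MigratorStep {
    public static void main(String[] args) throws Exception {
        // DDL taken verbatim from the migrator output; IF NOT EXISTS is what
        // makes re-running the numbered script idempotent.
        String ddl = "CREATE TABLE IF NOT EXISTS toscapolicy ("
                + "`DESCRIPTION` VARCHAR(255) NULL, "
                + "derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, "
                + "name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                + "type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, "
                + "PRIMARY KEY PK_TOSCAPOLICY (name, version))";
        // URL, user, and password below are illustrative placeholders only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_password");
             Statement stmt = conn.createStatement()) {
            stmt.execute(ddl);
        }
    }
}
```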
23:17:00 policy-pap | sasl.login.retry.backoff.ms = 100
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.842139201Z level=info msg="Executing migration" id="Add column disable_resolve_message"
23:17:00 kafka | [2024-01-21 23:14:33,356] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:17:00 policy-db-migrator |
23:17:00 policy-pap | sasl.mechanism = GSSAPI
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.845467803Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.329422ms
23:17:00 kafka | [2024-01-21 23:14:38,239] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
23:17:00 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
23:17:00 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.855081266Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
23:17:00 kafka | [2024-01-21 23:14:38,241] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | sasl.oauthbearer.expected.audience = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.856282188Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.203052ms
23:17:00 kafka | [2024-01-21 23:15:04,428] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
23:17:00 policy-pap | sasl.oauthbearer.expected.issuer = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.859936893Z level=info msg="Executing migration" id="Update alert table charset"
23:17:00 kafka | [2024-01-21 23:15:04,437] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.859966154Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=29.421µs
23:17:00 kafka | [2024-01-21 23:15:04,455] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
23:17:00 policy-db-migrator |
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.863987472Z level=info msg="Executing migration" id="Update alert_notification table charset"
23:17:00 kafka | [2024-01-21 23:15:04,468] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
23:17:00 policy-db-migrator |
23:17:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.864127474Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=140.272µs
23:17:00 kafka | [2024-01-21 23:15:04,495] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(9Lf29r26S7WDxCJgkjd7Yg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
23:17:00 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
23:17:00 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.872967679Z level=info msg="Executing migration" id="create notification_journal table v1"
23:17:00 kafka | [2024-01-21 23:15:04,496] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.874458804Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.493745ms
23:17:00 kafka | [2024-01-21 23:15:04,498] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:17:00 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.883094657Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
23:17:00 kafka | [2024-01-21 23:15:04,498] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.884307869Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.213192ms
23:17:00 kafka | [2024-01-21 23:15:04,503] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
23:17:00 policy-db-migrator |
23:17:00 policy-pap | security.protocol = PLAINTEXT
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.889319268Z level=info msg="Executing migration" id="drop alert_notification_journal"
23:17:00 kafka | [2024-01-21 23:15:04,503] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:17:00 policy-db-migrator |
23:17:00 policy-pap | security.providers = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.890155226Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=835.358µs
23:17:00 kafka | [2024-01-21 23:15:04,533] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:00 policy-db-migrator | > upgrade 0730-toscaproperty.sql
23:17:00 policy-pap | send.buffer.bytes = 131072
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.894327176Z level=info msg="Executing migration" id="create alert_notification_state table v1"
23:17:00 kafka | [2024-01-21 23:15:04,545] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | session.timeout.ms = 45000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.895151324Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=820.298µs
23:17:00 kafka | [2024-01-21 23:15:04,548] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
23:17:00 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.90191191Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
23:17:00 kafka | [2024-01-21 23:15:04,553] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:17:00 policy-pap | socket.connection.setup.timeout.ms = 10000
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.904087591Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=2.179902ms
23:17:00 kafka | [2024-01-21 23:15:04,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
23:17:00 policy-db-migrator | --------------
23:17:00 policy-pap | ssl.cipher.suites = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.910790656Z level=info msg="Executing migration" id="Add for to alert table"
23:17:00 kafka | [2024-01-21 23:15:04,554] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:17:00 policy-db-migrator |
23:17:00 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.913673844Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.882818ms
23:17:00 kafka | [2024-01-21 23:15:04,557] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger)
23:17:00 policy-db-migrator |
23:17:00 policy-pap | ssl.endpoint.identification.algorithm = https
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.916726513Z level=info msg="Executing migration" id="Add column uid in alert_notification"
23:17:00 kafka | [2024-01-21 23:15:04,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:17:00 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
23:17:00 kafka | [2024-01-21 23:15:04,565] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(d61QdiLrRDGfXeRddxpvYw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.920712922Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.987539ms
23:17:00 policy-pap | ssl.engine.factory.class = null
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.923961013Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
23:17:00 kafka | [2024-01-21 23:15:04,566] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
23:17:00 policy-pap | ssl.key.password = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.924240616Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=279.153µs
23:17:00 kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.keymanager.algorithm = SunX509
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.929246844Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
23:17:00 kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.keystore.certificate.chain = null
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.930239974Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=991.8µs
23:17:00 kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.keystore.key = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.934855769Z level=info msg="Executing migration" id="Remove unique index org_id_name"
23:17:00 kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.keystore.location = null
23:17:00 policy-db-migrator |
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.935762178Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=905.889µs
23:17:00 kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.keystore.password = null
23:17:00 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.951200607Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
23:17:00 kafka | [2024-01-21 23:15:04,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.keystore.type = JKS
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.957875362Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.676525ms
23:17:00 kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.protocol = TLSv1.3
23:17:00 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.963390465Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
23:17:00 kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.provider = null
23:17:00 policy-db-migrator | --------------
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.963465486Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=74.741µs
23:17:00 kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-db-migrator |
23:17:00 kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.secure.random.implementation = null
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.967272433Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
23:17:00 kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.trustmanager.algorithm = PKIX
23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.968292263Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.01956ms
23:17:00 policy-db-migrator |
23:17:00 kafka | [2024-01-21 23:15:04,568] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:00 policy-pap | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-01-21T23:14:31.971618725Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:17:00 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:17:00 kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:00 policy-pap | ssl.truststore.location = null 23:17:00 grafana | logger=migrator t=2024-01-21T23:14:31.972777976Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.159261ms 23:17:00 policy-db-migrator | -------------- 23:17:00 kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:01 policy-pap | ssl.truststore.password = null 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:31.978424061Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:01 kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:01 policy-pap | ssl.truststore.type = JKS 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:31.978684973Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=255.532µs 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:01 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:31.983684912Z level=info msg="Executing migration" id="create annotation table v5" 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:01 policy-pap | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:31.984628291Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=942.289µs 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,569] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:03.835+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:31.988141965Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:17:01 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:17:01 kafka | 
23:17:01 kafka | [2024-01-21 23:15:04,570] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 policy-pap | [2024-01-21T23:15:03.835+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:31.989085514Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=943.099µs
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,570] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 policy-pap | [2024-01-21T23:15:03.835+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878903835
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:31.99284579Z level=info msg="Executing migration" id="add index annotation 1 v3"
23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
23:17:01 kafka | [2024-01-21 23:15:04,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:31.994727459Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.881569ms
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | [2024-01-21T23:15:03.835+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.000639156Z level=info msg="Executing migration" id="add index annotation 2 v3"
23:17:01 policy-db-migrator |
23:17:01 policy-pap | [2024-01-21T23:15:03.835+00:00|INFO|ServiceManager|main] Policy PAP starting topics
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.001746997Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.106081ms
23:17:01 policy-db-migrator |
23:17:01 policy-pap | [2024-01-21T23:15:03.836+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=988c2327-3928-4e75-b348-c4ca60151503, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.006099129Z level=info msg="Executing migration" id="add index annotation 3 v3"
23:17:01 policy-db-migrator | > upgrade 0780-toscarequirements.sql
23:17:01 policy-pap | [2024-01-21T23:15:03.836+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0096ba3d-86d0-4a50-8361-ec89b03a0194, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.007689074Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.589505ms
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | [2024-01-21T23:15:03.836+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3bade7b5-8875-4f5c-b873-2f3ab75fe5de, alive=false, publisher=null]]: starting
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.013615661Z level=info msg="Executing migration" id="add index annotation 4 v3"
23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
23:17:01 policy-pap | [2024-01-21T23:15:03.875+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.014849923Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.237892ms
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | acks = -1
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.018533758Z level=info msg="Executing migration" id="Update annotation table charset"
23:17:01 policy-db-migrator |
23:17:01 policy-pap | auto.include.jmx.reporter = true
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
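Note: the policy-pap entries above show PAP's two Kafka topic sources (consumer groups policy-pap and 0096ba3d-86d0-4a50-8361-ec89b03a0194) subscribing to policy-pdp-pap on kafka:9092. A minimal standalone sketch of an equivalent subscription with the plain Kafka client follows; it mirrors the bootstrap server, group id, and String deserializers from the consumer config dumped in this log, but it is an illustration, not ONAP's SingleThreadedBusTopicSource/KafkaConsumerWrapper.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // servers=[kafka:9092] in the log
        props.put("group.id", "policy-pap");            // consumerGroup=policy-pap above
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap")); // same topic as "Subscribed to topic(s)" above
            // fetchTimeout=15000 in the log corresponds to a 15 s poll window here
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
            }
        }
    }
}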
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.018571558Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=39.71µs
23:17:01 policy-db-migrator |
23:17:01 policy-pap | batch.size = 16384
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.023930359Z level=info msg="Executing migration" id="Add column region_id to annotation table"
23:17:01 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
23:17:01 policy-pap | bootstrap.servers = [kafka:9092]
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.028149499Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.21814ms
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | buffer.memory = 33554432
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.032669943Z level=info msg="Executing migration" id="Drop category_id index"
23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:17:01 policy-pap | client.dns.lookup = use_all_dns_ips
23:17:01 kafka | [2024-01-21 23:15:04,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.03349551Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=825.287µs
23:17:01 policy-pap | client.id = producer-1
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.037472758Z level=info msg="Executing migration" id="Add column tags to annotation table"
23:17:01 policy-pap | compression.type = none
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.043812539Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.334891ms
23:17:01 policy-pap | connections.max.idle.ms = 540000
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.049518533Z level=info msg="Executing migration" id="Create annotation_tag table v2"
23:17:01 policy-pap | delivery.timeout.ms = 120000
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.050014268Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=496.495µs
23:17:01 policy-pap | enable.idempotence = true
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.055821923Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
23:17:01 policy-pap | interceptor.classes = []
23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.058515879Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=2.687036ms
23:17:01 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.063551917Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
23:17:01 policy-pap | linger.ms = 0
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.064514526Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=962.519µs
23:17:01 policy-pap | max.block.ms = 60000
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
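Note: every policy-db-migrator step above follows the same shape: an "> upgrade NNNN-*.sql" banner, a divider, then a single idempotent DDL statement. Because each table is created with CREATE TABLE IF NOT EXISTS, re-running a script is a no-op rather than a failure. A rough JDBC sketch of executing one such step is below; the JDBC URL and credentials are hypothetical placeholders (the CSIT wires up its own database container), and the DDL is the 0780 statement quoted verbatim from this log.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MigrationStep {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details, for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user");
             Statement stmt = conn.createStatement()) {
            // Idempotent DDL: safe to execute repeatedly across migrator re-runs.
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS toscarequirements ("
                    + "name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                    + "PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))");
        }
    }
}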
23:17:01 policy-db-migrator |
23:17:01 policy-pap | max.in.flight.requests.per.connection = 5
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.068476655Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.082992733Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.508779ms
23:17:01 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
23:17:01 policy-pap | max.request.size = 1048576
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.089108231Z level=info msg="Executing migration" id="Create annotation_tag table v3"
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | metadata.max.age.ms = 300000
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.089727797Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=619.486µs
23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:17:01 policy-pap | metadata.max.idle.ms = 300000
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.094658454Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | metric.reporters = []
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.095555363Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=897.329µs
23:17:01 policy-db-migrator |
23:17:01 policy-pap | metrics.num.samples = 2
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.09944547Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
23:17:01 policy-db-migrator |
23:17:01 policy-pap | metrics.recording.level = INFO
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.099728673Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=283.433µs
23:17:01 policy-db-migrator | > upgrade 0820-toscatrigger.sql
23:17:01 policy-pap | metrics.sample.window.ms = 30000
23:17:01 kafka | [2024-01-21 23:15:04,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | partitioner.adaptive.partitioning.enable = true
23:17:01 kafka | [2024-01-21 23:15:04,595] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.105808421Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
23:17:01 policy-pap | partitioner.availability.timeout.ms = 0
23:17:01 kafka | [2024-01-21 23:15:04,596] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.10678111Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=978.01µs
23:17:01 policy-pap | partitioner.class = null
23:17:01 kafka | [2024-01-21 23:15:04,597] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.112916318Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
23:17:01 policy-pap | partitioner.ignore.keys = false
23:17:01 kafka | [2024-01-21 23:15:04,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.113202321Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=285.923µs
23:17:01 policy-pap | receive.buffer.bytes = 32768
23:17:01 kafka | [2024-01-21 23:15:04,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.118803774Z level=info msg="Executing migration" id="Add created time to annotation table"
23:17:01 policy-pap | reconnect.backoff.max.ms = 1000
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.123330518Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.534054ms
23:17:01 policy-pap | reconnect.backoff.ms = 50
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.129529347Z level=info msg="Executing migration" id="Add updated time to annotation table"
23:17:01 policy-pap | request.timeout.ms = 30000
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.134422314Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.895317ms
23:17:01 policy-pap | retries = 2147483647
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.137902387Z level=info msg="Executing migration" id="Add index for created in annotation table"
23:17:01 policy-pap | retry.backoff.ms = 100
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.138704395Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=802.028µs
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.client.callback.handler.class = null
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.143691792Z level=info msg="Executing migration" id="Add index for updated in annotation table"
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.jaas.config = null
23:17:01 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.144668742Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=977.24µs
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.147961863Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:17:01 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.148258356Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=329.213µs
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.kerberos.service.name = null
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.151425546Z level=info msg="Executing migration" id="Add epoch_end column"
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.156093641Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.667065ms
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.163504541Z level=info msg="Executing migration" id="Add index for epoch_end"
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.login.callback.handler.class = null
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.16446653Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=965.659µs
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.login.class = null
23:17:01 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.168990104Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.login.connect.timeout.ms = null
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.169174835Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=181.841µs
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.login.read.timeout.ms = null
23:17:01 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.172495557Z level=info msg="Executing migration" id="Move region to single row"
23:17:01 kafka | [2024-01-21 23:15:04,599] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
23:17:01 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.172873401Z level=info msg="Migration successfully executed" id="Move region to single row" duration=377.594µs
23:17:01 kafka | [2024-01-21 23:15:04,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.17805969Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
23:17:01 kafka | [2024-01-21 23:15:04,602] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
23:17:01 policy-pap | sasl.login.refresh.window.factor = 0.8
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.179371193Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.310853ms
23:17:01 kafka | [2024-01-21 23:15:04,603] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:17:01 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.184674943Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
23:17:01 kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.186021566Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.349223ms
23:17:01 kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.login.retry.backoff.ms = 100
23:17:01 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
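Note: the grafana migrator entries come in pairs, an "Executing migration" line followed by a "Migration successfully executed" line reporting the elapsed time in µs or ms. A small Java stand-in for that execute-then-time logging pattern is sketched below; Grafana's actual migrator is written in Go, so this only illustrates the shape of the log output, with a hypothetical no-op step.

public class TimedMigration {
    static void run(String id, Runnable step) {
        System.out.println("level=info msg=\"Executing migration\" id=\"" + id + "\"");
        long start = System.nanoTime();
        step.run(); // the migration's DDL/DML would execute here
        long micros = (System.nanoTime() - start) / 1_000;
        System.out.println("level=info msg=\"Migration successfully executed\" id=\"" + id
                + "\" duration=" + micros + "µs");
    }

    public static void main(String[] args) {
        run("create team table", () -> { /* no-op placeholder */ });
    }
}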
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.190311107Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
23:17:01 kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.mechanism = GSSAPI
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.191440728Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.140731ms
23:17:01 kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.197120542Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
23:17:01 kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.oauthbearer.expected.audience = null
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.197987971Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=866.909µs
23:17:01 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
23:17:01 kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.oauthbearer.expected.issuer = null
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.20524434Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.206775654Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.536224ms
23:17:01 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
23:17:01 kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.215246245Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.215993492Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=747.667µs
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.220942939Z level=info msg="Executing migration" id="Increase tags column to length 4096"
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.221078651Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=139.142µs
23:17:01 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
23:17:01 kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.229944235Z level=info msg="Executing migration" id="create test_data table"
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.231123247Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.183852ms
23:17:01 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
23:17:01 kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | security.protocol = PLAINTEXT
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.236193235Z level=info msg="Executing migration" id="create dashboard_version table v1"
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | security.providers = null
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.237064863Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=873.368µs
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | send.buffer.bytes = 131072
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.242591726Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.243451734Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=860.108µs
23:17:01 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
23:17:01 kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | socket.connection.setup.timeout.ms = 10000
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.246729235Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.cipher.suites = null
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.247600964Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=871.419µs
23:17:01 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
23:17:01 kafka | [2024-01-21 23:15:04,607] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.252586531Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,607] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.endpoint.identification.algorithm = https
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.252792883Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=209.212µs
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,607] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.engine.factory.class = null
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.256852642Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.257215966Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=363.524µs
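Note: the 0830-0920 steps above only create indexes, yet their names all start with FK_. The matching FOREIGN KEY constraints are added by later scripts (the 0960 step appears further down in this log), so the referencing columns are indexed before the constraint is declared. A sketch of that two-phase ordering, reusing both statements verbatim from this log; the JDBC plumbing around them is an assumption.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class FkMigrationOrder {
    static void applyCapabilitiesFk(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            // 0830: index the referencing columns first ...
            stmt.executeUpdate("CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName "
                    + "ON toscanodetemplate(capabilitiesName, capabilitiesVersion)");
            // 0960: ... then declare the foreign key of the same name.
            stmt.executeUpdate("ALTER TABLE toscanodetemplate "
                    + "ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName "
                    + "FOREIGN KEY (capabilitiesName, capabilitiesVersion) "
                    + "REFERENCES toscacapabilityassignments (name, version) "
                    + "ON UPDATE RESTRICT ON DELETE RESTRICT");
        }
    }
}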
23:17:01 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:17:01 kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.key.password = null
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.267428903Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.keymanager.algorithm = SunX509
23:17:01 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.267527754Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=103.511µs
23:17:01 kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.keystore.certificate.chain = null
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.272153698Z level=info msg="Executing migration" id="create team table"
23:17:01 kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.keystore.key = null
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.273217138Z level=info msg="Migration successfully executed" id="create team table" duration=1.06294ms
23:17:01 kafka | [2024-01-21 23:15:04,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.keystore.location = null
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.278247046Z level=info msg="Executing migration" id="add index team.org_id"
23:17:01 kafka | [2024-01-21 23:15:04,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.keystore.password = null
23:17:01 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.27966459Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.417794ms
23:17:01 kafka | [2024-01-21 23:15:04,610] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
23:17:01 policy-pap | ssl.keystore.type = JKS
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.283381945Z level=info msg="Executing migration" id="add unique index team_org_id_name"
23:17:01 kafka | [2024-01-21 23:15:04,611] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:17:01 policy-pap | ssl.protocol = TLSv1.3
23:17:01 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.284293824Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=911.089µs
23:17:01 kafka | [2024-01-21 23:15:04,675] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-pap | ssl.provider = null
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.288652126Z level=info msg="Executing migration" id="Add column uid in team"
23:17:01 kafka | [2024-01-21 23:15:04,692] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.293623573Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.969877ms
23:17:01 policy-pap | ssl.secure.random.implementation = null
23:17:01 kafka | [2024-01-21 23:15:04,694] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.303969252Z level=info msg="Executing migration" id="Update uid column values in team"
23:17:01 policy-pap | ssl.trustmanager.algorithm = PKIX
23:17:01 kafka | [2024-01-21 23:15:04,695] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.304293635Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=326.243µs
23:17:01 policy-pap | ssl.truststore.certificates = null
23:17:01 kafka | [2024-01-21 23:15:04,697] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(9Lf29r26S7WDxCJgkjd7Yg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.312902887Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
23:17:01 policy-pap | ssl.truststore.location = null
23:17:01 kafka | [2024-01-21 23:15:04,709] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:17:01 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.314078138Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.178901ms
23:17:01 policy-pap | ssl.truststore.password = null
23:17:01 kafka | [2024-01-21 23:15:04,719] INFO [Broker id=1] Finished LeaderAndIsr request in 162ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.320412029Z level=info msg="Executing migration" id="create team member table"
23:17:01 policy-pap | ssl.truststore.type = JKS
23:17:01 kafka | [2024-01-21 23:15:04,722] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=9Lf29r26S7WDxCJgkjd7Yg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.321151436Z level=info msg="Migration successfully executed" id="create team member table" duration=739.127µs
23:17:01 policy-pap | transaction.timeout.ms = 60000
23:17:01 kafka | [2024-01-21 23:15:04,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.324346567Z level=info msg="Executing migration" id="add index team_member.org_id"
23:17:01 policy-pap | transactional.id = null
23:17:01 kafka | [2024-01-21 23:15:04,731] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:17:01 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.325423577Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.07551ms
23:17:01 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:17:01 kafka | [2024-01-21 23:15:04,734] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.332495134Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
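Note: the kafka entries above complete the become-leader transition for policy-pdp-pap-0: broker 1 loads the partition log, becomes leader at epoch 0 with ISR [1], answers the LeaderAndIsr request, and caches the new metadata. A small AdminClient sketch that would confirm the resulting leader/ISR assignment from outside the broker follows; it is a standalone illustration, not part of the CSIT.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;

public class LeaderCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092"); // broker address from the log
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("policy-pdp-pap"))
                    .allTopicNames().get().get("policy-pdp-pap");
            // Expected to print partition 0 with leader broker 1 and isr [1],
            // matching the LeaderAndIsr lines above.
            desc.partitions().forEach(p -> System.out.printf(
                    "partition=%d leader=%s isr=%s%n", p.partition(), p.leader(), p.isr()));
        }
    }
}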
23:17:01 policy-pap |
23:17:01 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.333602215Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.113371ms
23:17:01 policy-pap | [2024-01-21T23:15:03.892+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
23:17:01 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.336682154Z level=info msg="Executing migration" id="add index team_member.team_id"
23:17:01 policy-pap | [2024-01-21T23:15:03.910+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:17:01 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.337366961Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=684.467µs
23:17:01 policy-pap | [2024-01-21T23:15:03.910+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.340623762Z level=info msg="Executing migration" id="Add column email to team table"
23:17:01 policy-pap | [2024-01-21T23:15:03.910+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878903910
23:17:01 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
23:17:01 policy-pap | [2024-01-21T23:15:03.910+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3bade7b5-8875-4f5c-b873-2f3ab75fe5de, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.345060874Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.436442ms
23:17:01 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
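Note: the "Instantiated an idempotent producer" line above is the tail of the ProducerConfig dump that began earlier in this log (acks = -1, enable.idempotence = true, retries = 2147483647, String serializers). A minimal sketch of building a producer with those settings and publishing to policy-pdp-pap follows; the payload is a hypothetical placeholder, and this is a standalone illustration rather than PAP's InlineKafkaTopicSink.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PdpPapPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("acks", "all");                  // logged as acks = -1 (equivalent to all)
        props.put("enable.idempotence", "true");   // produces the "idempotent producer" log line
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical placeholder payload; PAP publishes PDP-PAP protocol messages here.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
        }
    }
}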
23:17:01 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.352960689Z level=info msg="Executing migration" id="Add column external to team_member table"
23:17:01 policy-pap | [2024-01-21T23:15:03.911+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a089b753-ae7e-4ef2-9693-63ba0de08080, alive=false, publisher=null]]: starting
23:17:01 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
23:17:01 policy-pap | [2024-01-21T23:15:03.911+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:17:01 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.357428212Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.467283ms
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | acks = -1
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.363208587Z level=info msg="Executing migration" id="Add column permission to team_member table"
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | auto.include.jmx.reporter = true
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.36771873Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.512283ms
23:17:01 policy-db-migrator | 
23:17:01 policy-pap | batch.size = 16384
23:17:01 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
23:17:01 kafka | [2024-01-21 23:15:04,758] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.371097192Z level=info msg="Executing migration" id="create dashboard acl table"
23:17:01 policy-pap | bootstrap.servers = [kafka:9092]
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.371957251Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=860.159µs
23:17:01 policy-pap | buffer.memory = 33554432
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.377441493Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
23:17:01 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 policy-pap | client.dns.lookup = use_all_dns_ips
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.378331171Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=893.838µs
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | client.id = producer-2
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.383561351Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | compression.type = none
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.384323449Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=761.498µs
23:17:01 policy-db-migrator | 
23:17:01 policy-pap | connections.max.idle.ms = 540000
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.387175416Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | delivery.timeout.ms = 120000
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.387843523Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=667.777µs
23:17:01 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | enable.idempotence = true
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.391972082Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | interceptor.classes = []
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.392691319Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=719.147µs
23:17:01 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | linger.ms = 0
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.400549804Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.401321141Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=771.057µs
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | max.block.ms = 60000
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.404900125Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | max.in.flight.requests.per.connection = 5
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.405555702Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=655.416µs
23:17:01 policy-pap | max.request.size = 1048576
23:17:01 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.408920364Z level=info msg="Executing migration" id="add index dashboard_permission"
23:17:01 policy-pap | metadata.max.age.ms = 300000
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,759] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.40954675Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=626.676µs
23:17:01 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | metadata.max.idle.ms = 300000
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.41583283Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.416224883Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=392.704µs
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | metric.reporters = []
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.419205742Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | metrics.num.samples = 2
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.419373043Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=166.161µs
23:17:01 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:17:01 policy-pap | metrics.recording.level = INFO
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.423977637Z level=info msg="Executing migration" id="create tag table"
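The controller entries above walk every __consumer_offsets partition from NewPartition to OnlinePartition with broker 1 as sole leader and ISR member; this CSIT runs a single broker, so replicas=[1] and isr=[1] throughout. A hedged sketch of how that resulting leadership could be inspected from outside with the stock AdminClient API (the bootstrap address comes from the log; the rest is illustrative, not part of this job):

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class LeaderCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription td = admin.describeTopics(List.of("__consumer_offsets"))
                    .allTopicNames().get().get("__consumer_offsets");
            // After the state changes logged above, every partition should report leader 1.
            td.partitions().forEach(p ->
                    System.out.println("partition " + p.partition() + " leader " + p.leader().id()));
        }
    }
}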
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | metrics.sample.window.ms = 30000
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.424475822Z level=info msg="Migration successfully executed" id="create tag table" duration=498.385µs
23:17:01 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | partitioner.adaptive.partitioning.enable = true
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.427758153Z level=info msg="Executing migration" id="add index tag.key_value"
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | partitioner.availability.timeout.ms = 0
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.428607461Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=848.948µs
23:17:01 policy-pap | partitioner.class = null
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.434868611Z level=info msg="Executing migration" id="create login attempt table"
23:17:01 policy-db-migrator | 
23:17:01 policy-pap | partitioner.ignore.keys = false
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.435361836Z level=info msg="Migration successfully executed" id="create login attempt table" duration=493.155µs
23:17:01 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
23:17:01 policy-pap | receive.buffer.bytes = 32768
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.443063059Z level=info msg="Executing migration" id="add index login_attempt.username"
23:17:01 policy-pap | reconnect.backoff.max.ms = 1000
23:17:01 kafka | [2024-01-21 23:15:04,760] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 policy-pap | reconnect.backoff.ms = 50
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.443697705Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=634.636µs
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | request.timeout.ms = 30000
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.449369839Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
23:17:01 policy-pap | retries = 2147483647
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.450257808Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=892.539µs
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | retry.backoff.ms = 100
23:17:01 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.453762981Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | sasl.client.callback.handler.class = null
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.472943165Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=19.177984ms
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | sasl.jaas.config = null
23:17:01 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.478906102Z level=info msg="Executing migration" id="create login_attempt v2"
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.479596778Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=690.457µs
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.486814507Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | sasl.kerberos.service.name = null
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:17:01 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.48822645Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.411553ms
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.492152718Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
23:17:01 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.492610722Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=457.754µs
23:17:01 policy-pap | sasl.login.callback.handler.class = null
23:17:01 kafka | [2024-01-21 23:15:04,761] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.496699461Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
23:17:01 policy-pap | sasl.login.class = null
23:17:01 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.497279047Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=576.276µs
23:17:01 policy-pap | sasl.login.connect.timeout.ms = null
23:17:01 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.502083383Z level=info msg="Executing migration" id="create user auth table"
23:17:01 policy-pap | sasl.login.read.timeout.ms = null
23:17:01 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.502746129Z level=info msg="Migration successfully executed" id="create user auth table" duration=662.526µs
23:17:01 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:17:01 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | --------------
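Each policy-db-migrator upgrade echoed above is one script carrying one statement. A minimal JDBC sketch of what a single step amounts to, reusing the 1020 constraint text from the log; the JDBC URL and credentials are invented placeholders, not the values this CSIT actually uses:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FkUpgradeStep {
    public static void main(String[] args) throws Exception {
        // Hypothetical MariaDB target; the real migrator reads its connection from its own config.
        try (Connection c = DriverManager.getConnection(
                     "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user");
             Statement s = c.createStatement()) {
            // One upgrade script == one DDL statement, exactly as echoed in the log.
            s.executeUpdate("ALTER TABLE toscaservicetemplate "
                    + "ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName "
                    + "FOREIGN KEY (policyTypesName, policyTypesVersion) "
                    + "REFERENCES toscapolicytypes (name, version) "
                    + "ON UPDATE RESTRICT ON DELETE RESTRICT");
        }
    }
}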
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.506227202Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
23:17:01 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:17:01 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.507636565Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.408993ms
23:17:01 policy-pap | sasl.login.refresh.window.factor = 0.8
23:17:01 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.511686684Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
23:17:01 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:17:01 kafka | [2024-01-21 23:15:04,762] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.511786565Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=100.941µs
23:17:01 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:17:01 kafka | [2024-01-21 23:15:04,762] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.521004673Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
23:17:01 policy-pap | sasl.login.retry.backoff.ms = 100
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,762] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.529164041Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.157828ms
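The grafana migrator lines follow one fixed pattern: announce "Executing migration", run it, then report the elapsed time. Grafana itself is written in Go; the Java sketch below (with a hypothetical Migration interface) merely restates that loop for clarity, under the assumption that each migration is a single synchronous step:

import java.time.Duration;
import java.time.Instant;
import java.util.List;

public class MigratorSketch {
    interface Migration {
        String id();
        void execute() throws Exception; // would run the DDL/DML for this step
    }

    static void run(List<Migration> migrations) throws Exception {
        for (Migration m : migrations) {
            System.out.println("Executing migration id=\"" + m.id() + "\"");
            Instant start = Instant.now();
            m.execute();
            Duration d = Duration.between(start, Instant.now());
            // Mirrors the "Migration successfully executed ... duration=" line in the log.
            System.out.println("Migration successfully executed id=\"" + m.id()
                    + "\" duration=" + d.toNanos() / 1000 + "µs");
        }
    }

    public static void main(String[] args) throws Exception {
        run(List.of(new Migration() {
            public String id() { return "create example table"; } // invented id
            public void execute() { /* no-op for the sketch */ }
        }));
    }
}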
23:17:01 policy-pap | sasl.mechanism = GSSAPI
23:17:01 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
23:17:01 kafka | [2024-01-21 23:15:04,762] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.532613974Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
23:17:01 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.536213979Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.597864ms
23:17:01 policy-pap | sasl.oauthbearer.expected.audience = null
23:17:01 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.539867163Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
23:17:01 policy-pap | sasl.oauthbearer.expected.issuer = null
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.544813711Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.945617ms
23:17:01 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.548621717Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
23:17:01 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.553513933Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.895096ms
23:17:01 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:17:01 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
23:17:01 kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.566101924Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
23:17:01 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.567193794Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.096431ms
23:17:01 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:17:01 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.571318063Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
23:17:01 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.575035219Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.717016ms
23:17:01 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,763] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.580954425Z level=info msg="Executing migration" id="create server_lock table"
23:17:01 policy-pap | security.protocol = PLAINTEXT
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.58147798Z level=info msg="Migration successfully executed" id="create server_lock table" duration=523.465µs
23:17:01 policy-pap | security.providers = null
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
23:17:01 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.584984794Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
23:17:01 policy-pap | send.buffer.bytes = 131072
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.5856272Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=642.266µs
23:17:01 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.589228945Z level=info msg="Executing migration" id="create user auth token table"
23:17:01 policy-pap | socket.connection.setup.timeout.ms = 10000
23:17:01 policy-pap | ssl.cipher.suites = null
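The sasl.* values in the producer dump above are untouched defaults: security.protocol = PLAINTEXT here, so the GSSAPI mechanism and the null jaas config are never exercised. For contrast, a hedged sketch of the extra properties a SASL/PLAIN setup would add to the same client; the username and password are invented for illustration:

import java.util.Properties;

public class SaslOverlay {
    static Properties saslPlainProps() {
        Properties p = new Properties();
        p.put("security.protocol", "SASL_PLAINTEXT"); // this log shows PLAINTEXT, i.e. no SASL at all
        p.put("sasl.mechanism", "PLAIN");             // log default is GSSAPI, unused here
        p.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"policy\" password=\"changeme\";"); // invented credentials
        return p;
    }
}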
23:17:01 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.58978338Z level=info msg="Migration successfully executed" id="create user auth token table" duration=555.105µs
23:17:01 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 policy-pap | ssl.endpoint.identification.algorithm = https
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.595437374Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
23:17:01 policy-pap | ssl.engine.factory.class = null
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.596246451Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=808.887µs
23:17:01 policy-pap | ssl.key.password = null
23:17:01 policy-db-migrator | > upgrade 0100-pdp.sql
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.600679004Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
23:17:01 policy-pap | ssl.keymanager.algorithm = SunX509
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.60134685Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=667.656µs
23:17:01 policy-pap | ssl.keystore.certificate.chain = null
23:17:01 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.607390118Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
23:17:01 policy-pap | ssl.keystore.key = null
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.608123455Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=733.147µs
23:17:01 policy-pap | ssl.keystore.location = null
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.613857949Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
23:17:01 policy-pap | ssl.keystore.password = null
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.617621465Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.763146ms
23:17:01 policy-pap | ssl.keystore.type = JKS
23:17:01 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.621117689Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
23:17:01 policy-pap | ssl.protocol = TLSv1.3
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.622623643Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.506964ms
23:17:01 policy-pap | ssl.provider = null
23:17:01 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
23:17:01 kafka | [2024-01-21 23:15:04,764] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.625923245Z level=info msg="Executing migration" id="create cache_data table"
23:17:01 policy-pap | ssl.secure.random.implementation = null
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.627119086Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.195952ms
23:17:01 policy-pap | ssl.trustmanager.algorithm = PKIX
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.633297855Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
23:17:01 policy-pap | ssl.truststore.certificates = null
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.633976461Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=679.276µs
23:17:01 policy-pap | ssl.truststore.location = null
23:17:01 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.638204082Z level=info msg="Executing migration" id="create short_url table v1"
23:17:01 policy-pap | ssl.truststore.password = null
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.63904384Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=839.778µs
23:17:01 policy-pap | ssl.truststore.type = JKS
23:17:01 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.642447302Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
23:17:01 policy-pap | transaction.timeout.ms = 60000
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.643582883Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.134251ms
23:17:01 policy-pap | transactional.id = null
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.651041505Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
23:17:01 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:17:01 policy-db-migrator | 
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.651117305Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=77.101µs
23:17:01 policy-pap | 
23:17:01 policy-db-migrator | > upgrade 0130-pdpstatistics.sql
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.658129702Z level=info msg="Executing migration" id="delete alert_definition table"
23:17:01 policy-pap | [2024-01-21T23:15:03.913+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,765] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.658194013Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=64.471µs
23:17:01 policy-pap | [2024-01-21T23:15:03.924+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
23:17:01 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.661415963Z level=info msg="Executing migration" id="recreate alert_definition table"
23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
23:17:01 policy-pap | [2024-01-21T23:15:03.924+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.661957569Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=541.596µs
23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
23:17:01 policy-pap | [2024-01-21T23:15:03.924+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705878903924
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.670371799Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
23:17:01 policy-pap | [2024-01-21T23:15:03.926+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a089b753-ae7e-4ef2-9693-63ba0de08080, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:17:01 policy-db-migrator | 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.671324738Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=957.329µs
23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
23:17:01 policy-pap | [2024-01-21T23:15:03.926+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
23:17:01 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.67675466Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
23:17:01 policy-pap | [2024-01-21T23:15:03.928+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.677542357Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=787.477µs
23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
23:17:01 policy-pap | [2024-01-21T23:15:03.934+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
23:17:01 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
t=2024-01-21T23:14:32.681064851Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:03.935+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.681119321Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=54.8µs 23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:03.940+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.685710235Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:03.942+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.686635454Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=925.539µs 23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:03.942+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.691012976Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 
for partition __consumer_offsets-35 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:03.944+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:17:01 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.692034465Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.021499ms 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.698933981Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:17:01 kafka | [2024-01-21 23:15:04,766] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:03.945+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.700659188Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.725077ms 23:17:01 kafka | [2024-01-21 23:15:04,767] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:03.948+00:00|INFO|ServiceManager|main] Policy PAP started 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.709021618Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:17:01 kafka | [2024-01-21 23:15:04,767] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:03.944+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.710927226Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.904468ms 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:03.953+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.499 seconds (process running for 12.156) 23:17:01 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.715462509Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:04.397+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: -jrszSKtSKq5TnXDeh3xeA 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.722446206Z level=info msg="Migration 
successfully executed" id="Add column paused in alert_definition" duration=6.982947ms 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:04.398+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:17:01 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.73025691Z level=info msg="Executing migration" id="drop alert_definition table" 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:04.398+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: -jrszSKtSKq5TnXDeh3xeA 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.73128966Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.03223ms 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:04.401+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: -jrszSKtSKq5TnXDeh3xeA 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.736237977Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:04.479+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.736327958Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=87.481µs 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:04.479+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Cluster ID: -jrszSKtSKq5TnXDeh3xeA 23:17:01 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.739714001Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | 
[2024-01-21T23:15:04.483+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.74072638Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.005839ms 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:04.485+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 23:17:01 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.74380001Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:04.504+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.745364855Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.563904ms 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:04.599+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.751476983Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:17:01 kafka | [2024-01-21 23:15:04,769] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:04.614+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.752503213Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.02558ms 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:17:01 policy-pap | 
[2024-01-21T23:15:04.717+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.756735753Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.756811594Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=76.031µs 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.760140765Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | JOIN pdpstatistics b 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.761543319Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.401754ms 23:17:01 policy-pap | [2024-01-21T23:15:04.729+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.765924691Z level=info msg="Executing migration" id="create alert_instance table" 23:17:01 policy-pap | [2024-01-21T23:15:05.354+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | SET a.id = b.id 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.767214663Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.289812ms 23:17:01 policy-pap | [2024-01-21T23:15:05.357+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica 
(state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.772835687Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:17:01 policy-pap | [2024-01-21T23:15:05.371+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] (Re-)joining group 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.774916177Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=2.08323ms 23:17:01 policy-pap | [2024-01-21T23:15:05.377+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.778326509Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:17:01 policy-pap | [2024-01-21T23:15:05.419+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Request joining group due to: need to re-join with the given member-id: consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.779215008Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=888.049µs 23:17:01 policy-pap | [2024-01-21T23:15:05.420+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.782706271Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:17:01 policy-pap | [2024-01-21T23:15:05.420+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.788229114Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.521783ms 23:17:01 policy-pap | [2024-01-21T23:15:05.420+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:05.421+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.79304894Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | [2024-01-21T23:15:05.421+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] (Re-)joining group 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.79413922Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.08967ms 23:17:01 policy-db-migrator | 23:17:01 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.797324851Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:17:01 policy-pap | [2024-01-21T23:15:08.447+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b', protocol='range'} 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,770] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.79832028Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=995.699µs 23:17:01 policy-pap | [2024-01-21T23:15:08.448+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Successfully joined group with generation Generation{generationId=1, 
memberId='consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd', protocol='range'} 23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:17:01 kafka | [2024-01-21 23:15:04,771] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.801573201Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:17:01 policy-pap | [2024-01-21T23:15:08.455+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Finished assignment for group at generation 1: {consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd=Assignment(partitions=[policy-pdp-pap-0])} 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,771] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.839635824Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=38.065973ms 23:17:01 policy-pap | [2024-01-21T23:15:08.455+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b=Assignment(partitions=[policy-pdp-pap-0])} 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,771] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.846286628Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:17:01 policy-pap | [2024-01-21T23:15:08.495+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Successfully synced group in generation Generation{generationId=1, memberId='consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd', protocol='range'} 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,771] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.8790478Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=32.756102ms 23:17:01 policy-pap | [2024-01-21T23:15:08.495+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:17:01 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:17:01 kafka | [2024-01-21 
23:15:04,771] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.884358651Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:17:01 policy-pap | [2024-01-21T23:15:08.500+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Adding newly assigned partitions: policy-pdp-pap-0 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,771] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.885391831Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.03291ms 23:17:01 policy-pap | [2024-01-21T23:15:08.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b', protocol='range'} 23:17:01 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:17:01 kafka | [2024-01-21 23:15:04,771] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.892216906Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:17:01 policy-pap | [2024-01-21T23:15:08.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,771] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.893818391Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.600905ms 23:17:01 policy-pap | [2024-01-21T23:15:08.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,771] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.910844814Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:17:01 policy-pap | [2024-01-21T23:15:08.522+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.917454487Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.610783ms 23:17:01 policy-pap | [2024-01-21T23:15:08.528+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Found no committed offset for partition policy-pdp-pap-0 23:17:01 policy-db-migrator | > upgrade 0210-sequence.sql 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:08.544+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3, groupId=0096ba3d-86d0-4a50-8361-ec89b03a0194] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.921195973Z level=info msg="Executing migration" id="create alert_rule table" 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.922113712Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=917.729µs 23:17:01 policy-pap | [2024-01-21T23:15:08.546+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.927362612Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:17:01 policy-pap | [2024-01-21T23:15:09.304+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.928602713Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.234641ms 23:17:01 policy-pap | [2024-01-21T23:15:09.304+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.943237633Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 23:17:01 policy-pap | [2024-01-21T23:15:09.307+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.945858208Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=2.629705ms 23:17:01 policy-pap | [2024-01-21T23:15:25.299+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 23:17:01 policy-db-migrator | > upgrade 0220-sequence.sql 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.952647843Z level=info msg="Executing migration" id="add index in alert_rule 
on org_id, namespace_uid, group_uid columns" 23:17:01 policy-pap | [] 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.953870774Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.222511ms 23:17:01 policy-pap | [2024-01-21T23:15:25.300+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:01 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.963314824Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"317323ab-8653-4275-bd16-05c52ce9a052","timestampMs":1705878925260,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"} 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.963607597Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=298.383µs 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.304+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.970887347Z level=info msg="Executing migration" id="add column for to alert_rule" 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"317323ab-8653-4275-bd16-05c52ce9a052","timestampMs":1705878925260,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"} 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.976670402Z 
level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.778715ms 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.308+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:17:01 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.981743821Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.387+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.986228653Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.482432ms 23:17:01 kafka | [2024-01-21 23:15:04,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.388+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting listener 23:17:01 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.989129941Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.388+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting timer 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:32.9952672Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.136719ms 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.389+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=63abcfac-b36b-46ca-b5a5-4a747a0bd5bc, expireMs=1705878955389] 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.00267722Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | [2024-01-21T23:15:25.391+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting enqueue 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.003580649Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=903.749µs 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | [2024-01-21T23:15:25.392+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate started 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.007625767Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:17:01 policy-pap | [2024-01-21T23:15:25.392+00:00|INFO|TimerManager|Thread-9] update timer waiting 29997ms Timer [name=63abcfac-b36b-46ca-b5a5-4a747a0bd5bc, expireMs=1705878955389] 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.008929399Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.299912ms 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 
policy-pap | [2024-01-21T23:15:25.395+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.01220968Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:17:01 policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.018214016Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.003676ms 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.444+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.024683747Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.032532101Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.849074ms 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | 
[2024-01-21T23:15:25.444+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.038263714Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:17:01 policy-pap | [2024-01-21T23:15:25.456+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.039231313Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=976.289µs 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.042583735Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.04852197Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.930475ms 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.456+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.054188724Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:17:01 kafka | [2024-01-21 23:15:04,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.469+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.060617214Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.43022ms 23:17:01 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"15fcabe4-fb3e-47f6-b4c1-43b4541365cb","timestampMs":1705878925454,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"} 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.065864183Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:17:01 policy-pap | [2024-01-21T23:15:25.471+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.065914674Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=51.051µs 23:17:01 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"15fcabe4-fb3e-47f6-b4c1-43b4541365cb","timestampMs":1705878925454,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup"} 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.069726459Z level=info msg="Executing migration" id="create alert_rule_version table" 23:17:01 policy-pap | [2024-01-21T23:15:25.474+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.070672918Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=945.029µs 23:17:01 policy-pap | [2024-01-21T23:15:25.478+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c3d46f28-2b6f-4c92-8ce2-04ffb23d1149","timestampMs":1705878925459,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.076409212Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | [2024-01-21T23:15:25.492+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.077732824Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.327452ms 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | [2024-01-21T23:15:25.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping enqueue 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.087772999Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:17:01 policy-pap | [2024-01-21T23:15:25.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping timer 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.089150272Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.378042ms 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.493+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=63abcfac-b36b-46ca-b5a5-4a747a0bd5bc, expireMs=1705878955389] 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.093160829Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:17:01 policy-pap | [2024-01-21T23:15:25.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping listener 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.09326948Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=108.981µs 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopped 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.101610608Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate 
successful 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.109934567Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=8.329208ms 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 start publishing next request 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.115402618Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:17:01 policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange starting 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.121667687Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.265039ms 23:17:01 kafka | [2024-01-21 23:15:04,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange starting listener 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.127559442Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:17:01 kafka | [2024-01-21 23:15:04,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:17:01 policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange starting timer 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.13375947Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.201588ms 23:17:01 kafka | [2024-01-21 23:15:04,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.496+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=a10cd6bc-dc68-4d18-bc08-45c43b208d80, expireMs=1705878955496] 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.140672705Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:17:01 kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | [2024-01-21T23:15:25.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange starting enqueue 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.145173357Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.497372ms 23:17:01 kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange started 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.148190105Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:17:01 kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:17:01 policy-pap | [2024-01-21T23:15:25.497+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=a10cd6bc-dc68-4d18-bc08-45c43b208d80, expireMs=1705878955496] 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.152597967Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.407302ms 23:17:01 kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.498+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.156510653Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:17:01 kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.156593244Z level=info msg="Migration successfully executed" 
id="fix is_paused column for alert_rule_version table" duration=83.521µs 23:17:01 kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.507+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.162160066Z level=info msg="Executing migration" id=create_alert_configuration_table 23:17:01 kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.162803222Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=645.766µs 23:17:01 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63abcfac-b36b-46ca-b5a5-4a747a0bd5bc","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c3d46f28-2b6f-4c92-8ce2-04ffb23d1149","timestampMs":1705878925459,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.168273714Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:17:01 policy-pap | [2024-01-21T23:15:25.508+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 63abcfac-b36b-46ca-b5a5-4a747a0bd5bc 23:17:01 kafka | [2024-01-21 23:15:04,776] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.172845186Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.571072ms 23:17:01 policy-pap | [2024-01-21T23:15:25.513+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:01 kafka | [2024-01-21 23:15:04,777] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.178928554Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:17:01 policy-db-migrator | 23:17:01 policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 kafka | [2024-01-21 23:15:04,777] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator 
t=2024-01-21T23:14:33.179046045Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=122.721µs 23:17:01 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:17:01 policy-pap | [2024-01-21T23:15:25.513+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:17:01 kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.183442256Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.518+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:01 kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.189010558Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.566532ms 23:17:01 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:17:01 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"52c4938e-a200-46d0-81f3-21a9a4d3de9b","timestampMs":1705878925511,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.193257918Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.519+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a10cd6bc-dc68-4d18-bc08-45c43b208d80 23:17:01 kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.194028145Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=769.547µs 23:17:01 policy-db-migrator | 23:17:01 policy-pap | [2024-01-21T23:15:25.524+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:01 kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.197566749Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | 
{"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","timestampMs":1705878925371,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.202287433Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.718854ms 23:17:01 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:17:01 kafka | [2024-01-21 23:15:04,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.20838822Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.209314739Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=930.839µs 23:17:01 policy-pap | [2024-01-21T23:15:25.524+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,780] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.214644349Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:17:01 policy-pap | [2024-01-21T23:15:25.526+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.215934921Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.293352ms 23:17:01 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a10cd6bc-dc68-4d18-bc08-45c43b208d80","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"52c4938e-a200-46d0-81f3-21a9a4d3de9b","timestampMs":1705878925511,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.222704544Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:17:01 policy-pap | [2024-01-21T23:15:25.526+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange stopping 23:17:01 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:17:01 kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.227914223Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=5.212359ms 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange stopping enqueue 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.231491277Z level=info msg="Executing migration" id="create provenance_type table" 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange stopping timer 23:17:01 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:17:01 kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.232083602Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=591.185µs 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=a10cd6bc-dc68-4d18-bc08-45c43b208d80, expireMs=1705878955496] 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.236473973Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange stopping listener 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.237313701Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=840.498µs 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange stopped 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.243804292Z level=info msg="Executing migration" id="create alert_image table" 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpStateChange successful 23:17:01 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:17:01 kafka | [2024-01-21 23:15:04,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.245257806Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.454264ms 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 start publishing next request 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.252272122Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.253590464Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.322983ms 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting listener 23:17:01 policy-db-migrator | 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.257246358Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting timer 23:17:01 policy-db-migrator | > upgrade 
0180-jpatoscanodetemplate_metadata.sql 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.257343729Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=98.641µs 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=04a1ac11-bc72-4cab-ab24-e9132afd087a, expireMs=1705878955527] 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.26069086Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate starting enqueue 23:17:01 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.26166953Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=978.63µs 23:17:01 policy-pap | [2024-01-21T23:15:25.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate started 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.528+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.267157061Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:17:01 policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"04a1ac11-bc72-4cab-ab24-e9132afd087a","timestampMs":1705878925515,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.268260962Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.104281ms 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:17:01 
policy-pap | [2024-01-21T23:15:25.534+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:01 policy-db-migrator | > upgrade 0100-upgrade.sql 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.27343014Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:17:01 policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"04a1ac11-bc72-4cab-ab24-e9132afd087a","timestampMs":1705878925515,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.273886244Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.534+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:17:01 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.276749761Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.541+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.277279376Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=529.685µs 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:17:01 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"04a1ac11-bc72-4cab-ab24-e9132afd087a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2ab7fd29-16ad-4d9b-982a-342f9d03040b","timestampMs":1705878925536,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.282342554Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 
(state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.542+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 04a1ac11-bc72-4cab-ab24-e9132afd087a 23:17:01 policy-db-migrator | msg 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.283150081Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=807.617µs 23:17:01 kafka | [2024-01-21 23:15:04,792] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.543+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:01 policy-db-migrator | upgrade to 1100 completed 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.289042466Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:17:01 policy-pap | {"source":"pap-525feee6-7963-49fa-bcec-787a72551e23","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"04a1ac11-bc72-4cab-ab24-e9132afd087a","timestampMs":1705878925515,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.299432084Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=10.388968ms 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.543+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:17:01 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.302740855Z level=info msg="Executing migration" id="create library_element table v1" 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.546+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.303637633Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=896.788µs 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:17:01 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:17:01 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"04a1ac11-bc72-4cab-ab24-e9132afd087a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2ab7fd29-16ad-4d9b-982a-342f9d03040b","timestampMs":1705878925536,"name":"apex-6bd48436-2333-4034-833d-9cd0ef0573c6","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.308923613Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:17:01 policy-db-migrator | -------------- 23:17:01 policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.310148994Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.225481ms 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping enqueue 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.313495626Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.314358754Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=862.238µs 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping timer 23:17:01 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.317974598Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=04a1ac11-bc72-4cab-ab24-e9132afd087a, expireMs=1705878955527] 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.319789255Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.813837ms 23:17:01 kafka | [2024-01-21 23:15:04,793] 
TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopping listener 23:17:01 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.326089394Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.547+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate stopped 23:17:01 policy-db-migrator | -------------- 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.327220914Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.13136ms 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.551+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 PdpUpdate successful 23:17:01 policy-db-migrator | 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.331526135Z level=info msg="Executing migration" id="increase max description length to 2048" 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:17:01 policy-pap | [2024-01-21T23:15:25.551+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6bd48436-2333-4034-833d-9cd0ef0573c6 has no more requests 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.331612276Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=88.031µs 23:17:01 policy-pap | [2024-01-21T23:15:29.952+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:17:01 policy-db-migrator | -------------- 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.3352586Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:17:01 policy-pap | [2024-01-21T23:15:29.960+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:17:01 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.335396181Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=138.351µs 23:17:01 policy-pap 
23:17:01 policy-pap | [2024-01-21T23:15:30.395+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.340869493Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
23:17:01 policy-pap | [2024-01-21T23:15:30.988+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.34161139Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=742.167µs
23:17:01 policy-pap | [2024-01-21T23:15:30.989+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.346580416Z level=info msg="Executing migration" id="create data_keys table"
23:17:01 policy-pap | [2024-01-21T23:15:31.524+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group testGroup
23:17:01 policy-db-migrator | > upgrade 0120-audit_sequence.sql
23:17:01 kafka | [2024-01-21 23:15:04,793] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.34803764Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.456794ms
23:17:01 policy-pap | [2024-01-21T23:15:31.738+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy onap.restart.tca 1.0.0
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.354560001Z level=info msg="Executing migration" id="create secrets table"
23:17:01 policy-pap | [2024-01-21T23:15:31.853+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:17:01 kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.355628391Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.0678ms
23:17:01 policy-pap | [2024-01-21T23:15:31.853+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group testGroup
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.363746807Z level=info msg="Executing migration" id="rename data_keys name column to id"
23:17:01 policy-pap | [2024-01-21T23:15:31.854+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group testGroup
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.414091749Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=50.342592ms
23:17:01 policy-pap | [2024-01-21T23:15:31.869+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-01-21T23:15:31Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-01-21T23:15:31Z, user=policyadmin)]
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.417412151Z level=info msg="Executing migration" id="add name column into data_keys"
23:17:01 policy-pap | [2024-01-21T23:15:32.623+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
23:17:01 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
23:17:01 kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.424591788Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.178228ms
23:17:01 policy-pap | [2024-01-21T23:15:32.624+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,794] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.42907785Z level=info msg="Executing migration" id="copy data_keys id column values into name"
23:17:01 policy-pap | [2024-01-21T23:15:32.624+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,795] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.429228811Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=148.611µs
23:17:01 policy-pap | [2024-01-21T23:15:32.624+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,795] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.435933544Z level=info msg="Executing migration" id="rename data_keys name column to label"
23:17:01 policy-pap | [2024-01-21T23:15:32.624+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
23:17:01 policy-db-migrator | > upgrade 0130-statistics_sequence.sql
23:17:01 kafka | [2024-01-21 23:15:04,800] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.483307809Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=47.375145ms
23:17:01 policy-pap | [2024-01-21T23:15:32.634+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-21T23:15:32Z, user=policyadmin)]
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,801] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.48878118Z level=info msg="Executing migration" id="rename data_keys id column back to name"
23:17:01 policy-pap | [2024-01-21T23:15:33.000+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
23:17:01 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:17:01 kafka | [2024-01-21 23:15:04,802] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.534040514Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=45.258764ms
23:17:01 policy-pap | [2024-01-21T23:15:33.000+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,802] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.537215174Z level=info msg="Executing migration" id="create kv_store table v1"
23:17:01 policy-pap | [2024-01-21T23:15:33.000+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,802] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.537846Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=630.836µs
23:17:01 policy-pap | [2024-01-21T23:15:33.001+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
23:17:01 policy-db-migrator | --------------
23:17:01 kafka | [2024-01-21 23:15:04,815] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.541569495Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
23:17:01 policy-pap | [2024-01-21T23:15:33.001+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
23:17:01 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
23:17:01 kafka | [2024-01-21 23:15:04,816] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.542745306Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.173441ms
23:17:01 policy-pap | [2024-01-21T23:15:33.001+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.547693503Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
23:17:01 kafka | [2024-01-21 23:15:04,816] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
23:17:01 policy-pap | [2024-01-21T23:15:33.012+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-21T23:15:33Z, user=policyadmin)]
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.548015866Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=323.923µs
23:17:01 kafka | [2024-01-21 23:15:04,816] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-pap | [2024-01-21T23:15:53.594+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.551909292Z level=info msg="Executing migration" id="create permission table"
23:17:01 kafka | [2024-01-21 23:15:04,816] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-pap | [2024-01-21T23:15:53.595+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
23:17:01 policy-db-migrator | TRUNCATE TABLE sequence
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.55274554Z level=info msg="Migration successfully executed" id="create permission table" duration=836.118µs
23:17:01 kafka | [2024-01-21 23:15:04,825] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-pap | [2024-01-21T23:15:55.389+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=63abcfac-b36b-46ca-b5a5-4a747a0bd5bc, expireMs=1705878955389]
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.562534962Z level=info msg="Executing migration" id="add unique index permission.role_id"
23:17:01 kafka | [2024-01-21 23:15:04,826] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 policy-pap | [2024-01-21T23:15:55.496+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=a10cd6bc-dc68-4d18-bc08-45c43b208d80, expireMs=1705878955496]
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,826] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.564257508Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.722086ms
23:17:01 policy-db-migrator |
23:17:01 kafka | [2024-01-21 23:15:04,826] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.570165823Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
23:17:01 kafka | [2024-01-21 23:15:04,826] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.571329564Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.163641ms
23:17:01 kafka | [2024-01-21 23:15:04,838] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.574594345Z level=info msg="Executing migration" id="create role table"
23:17:01 kafka | [2024-01-21 23:15:04,839] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.575436823Z level=info msg="Migration successfully executed" id="create role table" duration=839.888µs
23:17:01 kafka | [2024-01-21 23:15:04,839] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.581973124Z level=info msg="Executing migration" id="add column display_name"
23:17:01 kafka | [2024-01-21 23:15:04,840] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.589948969Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.975635ms
23:17:01 kafka | [2024-01-21 23:15:04,840] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-db-migrator | DROP TABLE pdpstatistics
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.597336268Z level=info msg="Executing migration" id="add column group_name"
23:17:01 kafka | [2024-01-21 23:15:04,848] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.604409734Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.073446ms
23:17:01 kafka | [2024-01-21 23:15:04,849] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.607650955Z level=info msg="Executing migration" id="add index role.org_id"
23:17:01 kafka | [2024-01-21 23:15:04,849] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.608441112Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=789.487µs
23:17:01 kafka | [2024-01-21 23:15:04,849] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.611737013Z level=info msg="Executing migration" id="add unique index role_org_id_name"
23:17:01 kafka | [2024-01-21 23:15:04,849] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.612920854Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.183221ms
23:17:01 kafka | [2024-01-21 23:15:04,862] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.618536367Z level=info msg="Executing migration" id="add index role_org_id_uid"
23:17:01 kafka | [2024-01-21 23:15:04,863] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.619739218Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.202041ms
23:17:01 kafka | [2024-01-21 23:15:04,863] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.623708985Z level=info msg="Executing migration" id="create team role table"
23:17:01 kafka | [2024-01-21 23:15:04,864] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.624531493Z level=info msg="Migration successfully executed" id="create team role table" duration=820.538µs
23:17:01 kafka | [2024-01-21 23:15:04,864] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-db-migrator | > upgrade 0120-statistics_sequence.sql
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.628349909Z level=info msg="Executing migration" id="add index team_role.org_id"
23:17:01 kafka | [2024-01-21 23:15:04,875] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.629692522Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.341553ms
23:17:01 kafka | [2024-01-21 23:15:04,876] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 policy-db-migrator | DROP TABLE statistics_sequence
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.635261664Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
23:17:01 kafka | [2024-01-21 23:15:04,876] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | --------------
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.637210642Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.948298ms
23:17:01 kafka | [2024-01-21 23:15:04,876] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator |
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.64340395Z level=info msg="Executing migration" id="add index team_role.team_id"
23:17:01 kafka | [2024-01-21 23:15:04,877] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-db-migrator | policyadmin: OK: upgrade (1300)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.644577321Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.174321ms
23:17:01 kafka | [2024-01-21 23:15:04,886] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-db-migrator | name version
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.648426657Z level=info msg="Executing migration" id="create user role table"
23:17:01 kafka | [2024-01-21 23:15:04,887] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 policy-db-migrator | policyadmin 1300
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.649403097Z level=info msg="Migration successfully executed" id="create user role table" duration=975.709µs
23:17:01 kafka | [2024-01-21 23:15:04,887] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | ID script operation from_version to_version tag success atTime
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.654214212Z level=info msg="Executing migration" id="add index user_role.org_id"
23:17:01 kafka | [2024-01-21 23:15:04,888] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.655506274Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.291333ms
23:17:01 kafka | [2024-01-21 23:15:04,888] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.65935692Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
23:17:01 kafka | [2024-01-21 23:15:04,896] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.66146735Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.10797ms
23:17:01 kafka | [2024-01-21 23:15:04,897] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.666999861Z level=info msg="Executing migration" id="add index user_role.user_id"
23:17:01 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,897] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.668295664Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.294623ms
23:17:01 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,897] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.676868804Z level=info msg="Executing migration" id="create builtin role table"
23:17:01 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,897] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.678148256Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.279082ms
23:17:01 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,906] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.683829369Z level=info msg="Executing migration" id="add index builtin_role.role_id"
23:17:01 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,907] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.685612006Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.781817ms
23:17:01 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,907] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.689651734Z level=info msg="Executing migration" id="add index builtin_role.name"
23:17:01 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,907] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.691442421Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.789527ms
23:17:01 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,908] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.695391318Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
23:17:01 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,913] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.703421063Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.028835ms
23:17:01 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,914] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.709810143Z level=info msg="Executing migration" id="add index builtin_role.org_id"
23:17:01 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,914] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.711317087Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.506564ms
23:17:01 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,914] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.718491644Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
23:17:01 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,914] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.720276321Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.784077ms
23:17:01 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,925] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.723971426Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
23:17:01 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,925] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.725668552Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.696586ms
23:17:01 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,925] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.731693848Z level=info msg="Executing migration" id="add unique index role.uid"
23:17:01 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:32
23:17:01 kafka | [2024-01-21 23:15:04,925] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.732836999Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.143021ms
23:17:01 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,928] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.736489483Z level=info msg="Executing migration" id="create seed assignment table"
23:17:01 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,941] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.737665614Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.174301ms
23:17:01 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,942] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.742488539Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
23:17:01 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,942] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.744487958Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.999179ms
23:17:01 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,942] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.752364432Z level=info msg="Executing migration" id="add column hidden to role table"
23:17:01 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,942] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.76386236Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.494608ms
23:17:01 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,952] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.767552155Z level=info msg="Executing migration" id="permission kind migration"
23:17:01 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,953] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.774940994Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.387899ms
23:17:01 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,954] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
23:17:01 kafka | [2024-01-21 23:15:04,954] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.77989081Z level=info msg="Executing migration" id="permission attribute migration"
23:17:01 kafka | [2024-01-21 23:15:04,954] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.787751874Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.859994ms
23:17:01 kafka | [2024-01-21 23:15:04,962] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.795438866Z level=info msg="Executing migration" id="permission identifier migration"
23:17:01 kafka | [2024-01-21 23:15:04,963] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.803170489Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.731053ms
23:17:01 kafka | [2024-01-21 23:15:04,963] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.806700872Z level=info msg="Executing migration" id="add permission identifier index"
23:17:01 kafka | [2024-01-21 23:15:04,963] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.807509419Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=808.027µs
23:17:01 kafka | [2024-01-21 23:15:04,963] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.810895531Z level=info msg="Executing migration" id="create query_history table v1"
23:17:01 kafka | [2024-01-21 23:15:04,974] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.811546667Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=650.896µs
23:17:01 kafka | [2024-01-21 23:15:04,975] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.820203989Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
23:17:01 kafka | [2024-01-21 23:15:04,975] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.822692972Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.491534ms
23:17:01 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,975] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.827754889Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
23:17:01 kafka | [2024-01-21 23:15:04,975] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.828101742Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=347.863µs
23:17:01 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,986] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.833926987Z level=info msg="Executing migration" id="rbac disabled migrator"
23:17:01 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,987] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.834014598Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=88.421µs
23:17:01 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,988] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.84168021Z level=info msg="Executing migration" id="teams permissions migration"
23:17:01 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,988] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.842678459Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=998.049µs
23:17:01 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,988] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.847258412Z level=info msg="Executing migration" id="dashboard permissions"
23:17:01 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:33
23:17:01 kafka | [2024-01-21 23:15:04,995] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.848245682Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=988.04µs
23:17:01 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:04,996] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.852301979Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
23:17:01 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:04,996] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.85339093Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.088951ms
23:17:01 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:04,996] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.860091793Z level=info msg="Executing migration" id="drop managed folder create actions"
23:17:01 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:04,996] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.860635938Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=547.745µs
23:17:01 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,008] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.86517887Z level=info msg="Executing migration" id="alerting notification permissions"
23:17:01 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,009] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.865563664Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=384.984µs
23:17:01 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,009] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.872162996Z level=info msg="Executing migration" id="create query_history_star table v1"
23:17:01 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,009] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.873556879Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.391763ms
23:17:01 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,009] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.88321892Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
23:17:01 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,016] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.884996426Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.776876ms
23:17:01 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,017] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.890103214Z level=info msg="Executing migration" id="add column org_id in query_history_star"
23:17:01 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,017] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.898484643Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.372789ms
23:17:01 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,017] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.901779114Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
23:17:01 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,018] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.901920195Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=140.431µs
23:17:01 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,025] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.905099795Z level=info msg="Executing migration" id="create correlation table v1"
23:17:01 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,026] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.905982473Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=882.138µs
23:17:01 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,026] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.910368434Z level=info msg="Executing migration" id="add index correlations.uid"
23:17:01 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,026] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.911490265Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.121401ms
23:17:01 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,026] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.917592282Z level=info msg="Executing migration" id="add index correlations.source_uid"
23:17:01 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,032] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.920510019Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.919087ms
23:17:01 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,033] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.924375316Z level=info msg="Executing migration" id="add correlation config column"
23:17:01 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,033] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.933746703Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.370667ms
23:17:01 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,033] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.939890241Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
23:17:01 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,033] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.941852879Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.962898ms
23:17:01 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,040] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.945941508Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
23:17:01 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:34
23:17:01 kafka | [2024-01-21 23:15:05,040] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.947611834Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.666276ms
23:17:01 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
23:17:01 kafka | [2024-01-21 23:15:05,040] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.953111445Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
23:17:01 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.98353201Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=30.419905ms
23:17:01 kafka | [2024-01-21 23:15:05,041] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.988318245Z level=info msg="Executing migration" id="create correlation v2"
23:17:01 kafka | [2024-01-21 23:15:05,041] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
23:17:01 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.989019692Z level=info msg="Migration successfully executed" id="create correlation v2" duration=701.247µs
23:17:01 kafka | [2024-01-21 23:15:05,047] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:17:01 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.992942289Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
23:17:01 kafka | [2024-01-21 23:15:05,048] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:17:01 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.99413138Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.188261ms
23:17:01 kafka | [2024-01-21 23:15:05,048] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:33.999381659Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
23:17:01 kafka | [2024-01-21 23:15:05,048] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
23:17:01 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.001222766Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.840767ms
23:17:01 kafka | [2024-01-21 23:15:05,048] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1.
(state.change.logger) 23:17:01 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.005654104Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:17:01 kafka | [2024-01-21 23:15:05,055] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.00693246Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.278746ms 23:17:01 kafka | [2024-01-21 23:15:05,056] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.012251782Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:17:01 kafka | [2024-01-21 23:15:05,056] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.012544646Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=289.664µs 23:17:01 kafka | [2024-01-21 23:15:05,056] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.01703182Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:17:01 kafka | [2024-01-21 23:15:05,057] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.018258474Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.226344ms 23:17:01 kafka | [2024-01-21 23:15:05,063] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.021684154Z level=info msg="Executing migration" id="add provisioning column" 23:17:01 kafka | [2024-01-21 23:15:05,064] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.032256158Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.573284ms 23:17:01 kafka | [2024-01-21 23:15:05,064] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.036901712Z level=info msg="Executing migration" id="create entity_events table" 23:17:01 kafka | [2024-01-21 23:15:05,064] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.037446429Z level=info msg="Migration successfully executed" id="create entity_events table" duration=543.897µs 23:17:01 kafka | [2024-01-21 23:15:05,064] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:35 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.04185266Z level=info msg="Executing migration" id="create dashboard public config v1" 23:17:01 kafka | [2024-01-21 23:15:05,071] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.042764471Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=911.031µs 23:17:01 kafka | [2024-01-21 23:15:05,072] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.046349763Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:17:01 kafka | [2024-01-21 23:15:05,072] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.047049341Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:17:01 kafka | [2024-01-21 23:15:05,072] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2101242314320800u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.050983347Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:17:01 kafka | [2024-01-21 23:15:05,072] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.051715495Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:17:01 kafka | [2024-01-21 23:15:05,078] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.056121947Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:17:01 kafka | [2024-01-21 23:15:05,079] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.056889876Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=767.179µs 23:17:01 kafka | [2024-01-21 23:15:05,079] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.060852922Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:17:01 kafka | [2024-01-21 23:15:05,079] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.061750542Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=896.85µs 23:17:01 kafka | [2024-01-21 23:15:05,079] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.065875821Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:17:01 kafka | [2024-01-21 23:15:05,086] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.066959713Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.083092ms 23:17:01 kafka | [2024-01-21 23:15:05,087] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.073290207Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:17:01 kafka | [2024-01-21 23:15:05,087] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.075038088Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.746541ms 23:17:01 kafka | [2024-01-21 23:15:05,087] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.08375729Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:17:01 kafka | [2024-01-21 23:15:05,087] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.084820262Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.062962ms 23:17:01 kafka | [2024-01-21 23:15:05,093] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.088659757Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:17:01 kafka | [2024-01-21 23:15:05,094] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2101242314320900u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.090318306Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.658489ms 23:17:01 kafka | [2024-01-21 23:15:05,094] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:17:01 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:36 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.094812799Z level=info msg="Executing migration" id="Drop public config table" 23:17:01 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:36 23:17:01 kafka | [2024-01-21 23:15:05,094] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.096090514Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.274105ms 23:17:01 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:36 23:17:01 kafka | [2024-01-21 23:15:05,094] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.100081091Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:17:01 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:36 23:17:01 kafka | [2024-01-21 23:15:05,100] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.101085423Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.003622ms 23:17:01 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,100] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.105830899Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:17:01 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,101] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.107523659Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.69074ms 23:17:01 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,101] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.113407748Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:17:01 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,101] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.115170239Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.762331ms 23:17:01 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2101242314321000u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,108] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.119992766Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:17:01 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2101242314321100u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,108] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.121146999Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.156763ms 23:17:01 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2101242314321200u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,109] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.125798294Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:17:01 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2101242314321200u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,109] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.157730401Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=31.930907ms 23:17:01 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2101242314321200u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,109] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.1611015Z level=info msg="Executing migration" id="add annotations_enabled column" 23:17:01 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2101242314321200u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,115] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.169288537Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.182917ms 23:17:01 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2101242314321300u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,116] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.174346006Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:17:01 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2101242314321300u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,116] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.181890845Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.542029ms 23:17:01 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2101242314321300u 1 2024-01-21 23:14:37 23:17:01 kafka | [2024-01-21 23:15:05,116] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.186764713Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:17:01 policy-db-migrator | policyadmin: OK @ 1300 23:17:01 kafka | [2024-01-21 23:15:05,116] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.187176208Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=411.925µs 23:17:01 kafka | [2024-01-21 23:15:05,128] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.192254348Z level=info msg="Executing migration" id="add share column" 23:17:01 kafka | [2024-01-21 23:15:05,129] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.204708855Z level=info msg="Migration successfully executed" id="add share column" duration=12.461137ms 23:17:01 kafka | [2024-01-21 23:15:05,129] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.211742757Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:17:01 kafka | [2024-01-21 23:15:05,129] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.211943699Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=199.762µs 23:17:01 kafka | [2024-01-21 23:15:05,129] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,137] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,138] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.216415102Z level=info msg="Executing migration" id="create file table" 23:17:01 kafka | [2024-01-21 23:15:05,138] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.217275582Z level=info msg="Migration successfully executed" id="create file table" duration=859.85µs 23:17:01 kafka | [2024-01-21 23:15:05,138] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.221190229Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:17:01 kafka | [2024-01-21 23:15:05,138] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.222952369Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.76126ms 23:17:01 kafka | [2024-01-21 23:15:05,151] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.226929556Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:17:01 kafka | [2024-01-21 23:15:05,152] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.22893039Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.001314ms 23:17:01 kafka | [2024-01-21 23:15:05,152] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.233866508Z level=info msg="Executing migration" id="create file_meta table" 23:17:01 kafka | [2024-01-21 23:15:05,152] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.234970591Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.104083ms 23:17:01 kafka | [2024-01-21 23:15:05,152] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding 
replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.239262832Z level=info msg="Executing migration" id="file table idx: path key" 23:17:01 kafka | [2024-01-21 23:15:05,160] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.242287847Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=3.025006ms 23:17:01 kafka | [2024-01-21 23:15:05,160] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.254467611Z level=info msg="Executing migration" id="set path collation in file table" 23:17:01 kafka | [2024-01-21 23:15:05,160] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.254548872Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=77.641µs 23:17:01 kafka | [2024-01-21 23:15:05,161] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.259720403Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.259833734Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=114.301µs 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.264149565Z level=info msg="Executing migration" id="managed permissions migration" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.265041706Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=892.341µs 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.26879684Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.269136194Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=339.004µs 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.273460075Z level=info msg="Executing migration" id="RBAC action name migrator" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.275075504Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.613659ms 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.280599629Z level=info msg="Executing migration" id="Add UID column to playlist" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.290818179Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.21923ms 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.294913628Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.295358183Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=448.475µs 23:17:01 grafana | logger=migrator 
t=2024-01-21T23:14:34.300004238Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.302167133Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.162475ms 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.305813246Z level=info msg="Executing migration" id="update group index for alert rules" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.306456103Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=648.157µs 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.309804063Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.310131267Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=326.434µs 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.315905295Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.316523012Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=617.797µs 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.321618962Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.332995237Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.376375ms 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.336193084Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.34520625Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.007876ms 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.348703002Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.349490081Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=784.229µs 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.354357248Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.466060185Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=111.690297ms 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.469586806Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.470578728Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=991.082µs 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.475077221Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.476889692Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.811301ms 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.482182855Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:17:01 grafana | logger=migrator 
t=2024-01-21T23:14:34.520465186Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=38.283031ms 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.529593924Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.529804386Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=210.202µs 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.533321838Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.533670372Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=348.704µs 23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.537547057Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 23:17:01 kafka | [2024-01-21 23:15:05,161] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,168] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,169] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,169] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,169] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,169] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,179] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,179] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,180] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,180] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,180] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,189] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,190] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,190] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,190] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,190] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,196] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,197] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,197] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,197] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,197] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,202] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,203] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,203] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,203] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,205] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,213] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,214] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,214] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,214] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,214] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,220] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,220] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,221] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,221] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,221] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,229] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,230] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,230] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,230] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,230] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,237] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,237] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,237] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,237] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,237] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,244] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,244] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,244] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,244] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,245] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,255] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,256] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,256] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,256] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,256] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,268] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,268] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,269] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,269] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,269] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,277] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:01 kafka | [2024-01-21 23:15:05,278] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:01 kafka | [2024-01-21 23:15:05,278] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,278] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:17:01 kafka | [2024-01-21 23:15:05,278] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(d61QdiLrRDGfXeRddxpvYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.537882781Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=335.454µs
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.542765739Z level=info msg="Executing migration" id="create folder table"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.544191496Z level=info msg="Migration successfully executed" id="create folder table" duration=1.425097ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.548411216Z level=info msg="Executing migration" id="Add index for parent_uid"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.550267817Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.855891ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.554494067Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.55638255Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.887693ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.561072685Z level=info msg="Executing migration" id="Update folder title length"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.561103485Z level=info msg="Migration successfully executed" id="Update folder title length" duration=34.17µs
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.565886642Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.567867925Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.980353ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.572784363Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.574530733Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.74604ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.579390131Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.580583185Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.192154ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.585415912Z level=info msg="Executing migration" id="create anon_device table"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.586783008Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.366946ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.590775705Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.592261963Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.486568ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.596920897Z level=info msg="Executing migration" id="add index anon_device.updated_at"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.598076171Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.154714ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.604732159Z level=info msg="Executing migration" id="create signing_key table"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.605547829Z level=info msg="Migration successfully executed" id="create signing_key table" duration=814.35µs
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.615324554Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.617202336Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.876882ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.621467927Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.623363049Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.894522ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.627305146Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.62763114Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=309.973µs
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.631682797Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.641192239Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.508362ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.648617096Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.649166622Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=551.266µs
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.657816024Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.659747347Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.930903ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.66341234Z level=info msg="Executing migration" id="create sso_setting table"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.664365612Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=952.012µs
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.670782397Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.672034362Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.252745ms
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.676811158Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.677291564Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=481.166µs
23:17:01 grafana | logger=migrator t=2024-01-21T23:14:34.68118406Z level=info msg="migrations completed" performed=523 skipped=0 duration=3.941321669s
23:17:01 grafana | logger=sqlstore t=2024-01-21T23:14:34.690171066Z level=info msg="Created default admin" user=admin
23:17:01 grafana | logger=sqlstore t=2024-01-21T23:14:34.690485769Z level=info msg="Created default organization"
23:17:01 grafana | logger=secrets t=2024-01-21T23:14:34.698287981Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
23:17:01 grafana | logger=plugin.store t=2024-01-21T23:14:34.721728068Z level=info msg="Loading plugins..."
23:17:01 grafana | logger=local.finder t=2024-01-21T23:14:34.759352701Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
23:17:01 grafana | logger=plugin.store t=2024-01-21T23:14:34.759419502Z level=info msg="Plugins loaded" count=55 duration=37.693484ms
23:17:01 grafana | logger=query_data t=2024-01-21T23:14:34.762975834Z level=info msg="Query Service initialization"
23:17:01 grafana | logger=live.push_http t=2024-01-21T23:14:34.771582425Z level=info msg="Live Push Gateway initialization"
23:17:01 grafana | logger=ngalert.migration t=2024-01-21T23:14:34.77881233Z level=info msg=Starting
23:17:01 grafana | logger=ngalert.migration orgID=1 t=2024-01-21T23:14:34.779773602Z level=info msg="Migrating alerts for organisation"
23:17:01 grafana | logger=ngalert.migration orgID=1 t=2024-01-21T23:14:34.780207507Z level=info msg="Alerts found to migrate" alerts=0
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,282] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,283] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,284] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
23:17:01 grafana | logger=ngalert.migration orgID=1 t=2024-01-21T23:14:34.780663332Z level=warn msg="No available receivers"
23:17:01 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-01-21T23:14:34.783526346Z level=info msg="Completed legacy migration"
23:17:01 grafana | logger=infra.usagestats.collector t=2024-01-21T23:14:34.813050694Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
23:17:01 grafana | logger=provisioning.datasources t=2024-01-21T23:14:34.815672725Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
23:17:01 grafana | logger=provisioning.alerting t=2024-01-21T23:14:34.831145627Z level=info msg="starting to provision alerting"
23:17:01 grafana | logger=provisioning.alerting t=2024-01-21T23:14:34.831171237Z level=info msg="finished to provision alerting"
23:17:01 grafana | logger=grafanaStorageLogger t=2024-01-21T23:14:34.83142378Z level=info msg="Storage starting"
23:17:01 grafana | logger=ngalert.state.manager t=2024-01-21T23:14:34.832702315Z level=info msg="Warming state cache for startup"
23:17:01 grafana | logger=ngalert.state.manager t=2024-01-21T23:14:34.83305585Z level=info msg="State cache has been initialized" states=0 duration=383.805µs
23:17:01 grafana | logger=ngalert.scheduler t=2024-01-21T23:14:34.83309153Z level=info msg="Starting scheduler" tickInterval=10s
23:17:01 grafana | logger=ticker t=2024-01-21T23:14:34.83313254Z level=info msg=starting first_tick=2024-01-21T23:14:40Z
23:17:01 grafana | logger=ngalert.multiorg.alertmanager t=2024-01-21T23:14:34.833147941Z level=info msg="Starting MultiOrg Alertmanager"
23:17:01 grafana | logger=http.server t=2024-01-21T23:14:34.836576961Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
23:17:01 grafana | logger=plugins.update.checker t=2024-01-21T23:14:34.937647182Z level=info msg="Update check succeeded" duration=104.455431ms
23:17:01 grafana | logger=sqlstore.transactions t=2024-01-21T23:14:34.964382327Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:17:01 grafana | logger=sqlstore.transactions t=2024-01-21T23:14:34.975559949Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:17:01 grafana | logger=grafana.update.checker t=2024-01-21T23:14:36.361700661Z level=info msg="Update check succeeded" duration=1.529970587s
23:17:01 grafana | logger=infra.usagestats t=2024-01-21T23:15:33.847169346Z level=info msg="Usage stats are ready to report"
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,285] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,287] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,288] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,290] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,291] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,292] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,293] INFO [Broker id=1] Finished LeaderAndIsr request in 521ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,295] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=d61QdiLrRDGfXeRddxpvYw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,301] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,304] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,304] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,304] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,305] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,306] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,306] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,306] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,307] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,308] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,308] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:17:01 kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:17:01 kafka | [2024-01-21 23:15:05,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,308] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,309] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,309] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,311] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,311] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,311] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,311] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,311] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,311] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,311] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,313] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,313] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,313] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 22 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,314] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,314] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,314] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,314] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,314] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,314] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,315] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:01 kafka | [2024-01-21 23:15:05,315] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,315] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,324] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:01 kafka | [2024-01-21 23:15:05,326] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:17:01 kafka | [2024-01-21 
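The block above records broker 1 caching leader state for all 50 partitions of the internal __consumer_offsets topic, and the GroupMetadataManager finishing its offset and group-metadata load for each partition, all in response to a single UpdateMetadata request from controller 1. As a minimal sketch, assuming the stock Kafka CLI tools are available somewhere that can reach the kafka:9092 address the log itself reports, the resulting layout could be confirmed with:

  # Describe the internal offsets topic; on this single-broker CSIT setup it
  # should report PartitionCount: 50 with Leader: 1 and Replicas: 1 throughout.
  kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic __consumer_offsets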
23:17:01 kafka | [2024-01-21 23:15:05,413] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 0096ba3d-86d0-4a50-8361-ec89b03a0194 in Empty state. Created a new member id consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,413] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,430] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,433] INFO [GroupCoordinator 1]: Preparing to rebalance group 0096ba3d-86d0-4a50-8361-ec89b03a0194 in state PreparingRebalance with old generation 0 (__consumer_offsets-42) (reason: Adding new member consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,663] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group e43a1262-c2bd-4185-8b6c-0623a45ad046 in Empty state. Created a new member id consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:05,666] INFO [GroupCoordinator 1]: Preparing to rebalance group e43a1262-c2bd-4185-8b6c-0623a45ad046 in state PreparingRebalance with old generation 0 (__consumer_offsets-44) (reason: Adding new member consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:08,442] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:08,447] INFO [GroupCoordinator 1]: Stabilized group 0096ba3d-86d0-4a50-8361-ec89b03a0194 generation 1 (__consumer_offsets-42) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:08,463] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-e1f50569-bb82-4f7f-b4d4-41530694940b for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:08,467] INFO [GroupCoordinator 1]: Assignment received from leader consumer-0096ba3d-86d0-4a50-8361-ec89b03a0194-3-bf2e3bef-b43e-44f1-a9e0-15046cd4afdd for group 0096ba3d-86d0-4a50-8361-ec89b03a0194 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
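These GroupCoordinator entries trace the standard group-join handshake: each consumer first joins with an unknown member id, the coordinator generates an id and asks it to rejoin (the logged MemberIdRequiredException), the group passes through PreparingRebalance, stabilizes at generation 1, and the leader's partition assignment is accepted. A sketch of driving and inspecting the same handshake with the stock Kafka CLI tools; the topic and group names below are illustrative, only the kafka:9092 address comes from the log:

  # Start a consumer; the broker log should show the join / rejoin /
  # PreparingRebalance / Stabilized sequence for example-group.
  kafka-console-consumer.sh --bootstrap-server kafka:9092 \
    --topic example-topic --group example-group --from-beginning &

  # After "Stabilized group ... with 1 members", the group should describe
  # as Stable with one member owning all partitions of the topic.
  kafka-consumer-groups.sh --bootstrap-server kafka:9092 \
    --describe --group example-group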
23:17:01 kafka | [2024-01-21 23:15:08,667] INFO [GroupCoordinator 1]: Stabilized group e43a1262-c2bd-4185-8b6c-0623a45ad046 generation 1 (__consumer_offsets-44) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:17:01 kafka | [2024-01-21 23:15:08,684] INFO [GroupCoordinator 1]: Assignment received from leader consumer-e43a1262-c2bd-4185-8b6c-0623a45ad046-2-68ecb9d9-6955-4d56-8582-63ba0008f63b for group e43a1262-c2bd-4185-8b6c-0623a45ad046 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:17:01 ++ echo 'Tearing down containers...'
23:17:01 Tearing down containers...
23:17:01 ++ docker-compose down -v --remove-orphans
23:17:01 Stopping policy-apex-pdp ...
23:17:01 Stopping policy-pap ...
23:17:01 Stopping grafana ...
23:17:01 Stopping policy-api ...
23:17:01 Stopping kafka ...
23:17:01 Stopping prometheus ...
23:17:01 Stopping compose_zookeeper_1 ...
23:17:01 Stopping simulator ...
23:17:01 Stopping mariadb ...
23:17:02 Stopping grafana ... done
23:17:02 Stopping prometheus ... done
23:17:11 Stopping policy-apex-pdp ... done
23:17:22 Stopping simulator ... done
23:17:22 Stopping policy-pap ... done
23:17:23 Stopping mariadb ... done
23:17:23 Stopping kafka ... done
23:17:23 Stopping compose_zookeeper_1 ... done
23:17:32 Stopping policy-api ... done
23:17:32 Removing policy-apex-pdp ...
23:17:32 Removing policy-pap ...
23:17:32 Removing grafana ...
23:17:32 Removing policy-api ...
23:17:32 Removing kafka ...
23:17:32 Removing policy-db-migrator ...
23:17:32 Removing prometheus ...
23:17:32 Removing compose_zookeeper_1 ...
23:17:32 Removing simulator ...
23:17:32 Removing mariadb ...
23:17:32 Removing compose_zookeeper_1 ... done
23:17:32 Removing kafka ... done
23:17:32 Removing grafana ... done
23:17:32 Removing prometheus ... done
23:17:32 Removing policy-api ... done
23:17:32 Removing simulator ... done
23:17:32 Removing policy-apex-pdp ... done
23:17:32 Removing mariadb ... done
23:17:32 Removing policy-db-migrator ... done
23:17:32 Removing policy-pap ... done
23:17:32 Removing network compose_default
23:17:32 ++ cd /w/workspace/policy-pap-master-project-csit-pap
23:17:32 + load_set
23:17:32 + _setopts=hxB
23:17:32 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:17:32 ++ tr : ' '
23:17:32 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:32 + set +o braceexpand
23:17:32 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:32 + set +o hashall
23:17:32 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:32 + set +o interactive-comments
23:17:32 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:32 + set +o xtrace
23:17:32 ++ echo hxB
23:17:32 ++ sed 's/./& /g'
23:17:32 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:17:32 + set +h
23:17:32 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:17:32 + set +x
23:17:32 + [[ -n /tmp/tmp.C9xkkUvsOC ]]
23:17:32 + rsync -av /tmp/tmp.C9xkkUvsOC/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:17:33 sending incremental file list
23:17:33 ./
23:17:33 log.html
23:17:33 output.xml
23:17:33 report.html
23:17:33 testplan.txt
23:17:33
23:17:33 sent 910,600 bytes received 95 bytes 607,130.00 bytes/sec
23:17:33 total size is 910,059 speedup is 1.00
23:17:33 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:17:33 + exit 0
23:17:33 $ ssh-agent -k
23:17:33 unset SSH_AUTH_SOCK;
23:17:33 unset SSH_AGENT_PID;
23:17:33 echo Agent pid 2083 killed;
23:17:33 [ssh-agent] Stopped.
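The teardown step stops every CSIT container, removes them together with their volumes, orphans, and the compose network, then rsyncs the Robot Framework artifacts (log.html, output.xml, report.html, testplan.txt) out of the temporary directory into the workspace archive before exiting. A minimal sketch of the same pattern, assuming a compose project in the current directory; the directory variables are placeholders, not values from this job:

  #!/bin/bash
  # Tear down the test stack and archive the Robot outputs.
  ROBOT_TMP=${ROBOT_TMP:-/tmp/robot-output}       # placeholder path
  ARCHIVE_DIR=${ARCHIVE_DIR:-$PWD/csit/archives}  # placeholder path

  echo 'Tearing down containers...'
  # -v also removes the volumes; --remove-orphans catches containers no
  # longer in the compose file (e.g. policy-db-migrator above).
  docker-compose down -v --remove-orphans

  mkdir -p "$ARCHIVE_DIR"
  rsync -av "$ROBOT_TMP/" "$ARCHIVE_DIR/"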
23:17:33 Robot results publisher started...
23:17:33 -Parsing output xml:
23:17:33 Done!
23:17:33 WARNING! Could not find file: **/log.html
23:17:33 WARNING! Could not find file: **/report.html
23:17:33 -Copying log files to build dir:
23:17:33 Done!
23:17:33 -Assigning results to build:
23:17:33 Done!
23:17:33 -Checking thresholds:
23:17:33 Done!
23:17:33 Done publishing Robot results.
23:17:33 [PostBuildScript] - [INFO] Executing post build scripts.
23:17:33 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11468459741754327868.sh
23:17:33 ---> sysstat.sh
23:17:34 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4735386884158819356.sh
23:17:34 ---> package-listing.sh
23:17:34 ++ facter osfamily
23:17:34 ++ tr '[:upper:]' '[:lower:]'
23:17:34 + OS_FAMILY=debian
23:17:34 + workspace=/w/workspace/policy-pap-master-project-csit-pap
23:17:34 + START_PACKAGES=/tmp/packages_start.txt
23:17:34 + END_PACKAGES=/tmp/packages_end.txt
23:17:34 + DIFF_PACKAGES=/tmp/packages_diff.txt
23:17:34 + PACKAGES=/tmp/packages_start.txt
23:17:34 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:17:34 + PACKAGES=/tmp/packages_end.txt
23:17:34 + case "${OS_FAMILY}" in
23:17:34 + dpkg -l
23:17:34 + grep '^ii'
23:17:34 + '[' -f /tmp/packages_start.txt ']'
23:17:34 + '[' -f /tmp/packages_end.txt ']'
23:17:34 + diff /tmp/packages_start.txt /tmp/packages_end.txt
23:17:34 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:17:34 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
23:17:34 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
23:17:34 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17078034781118754770.sh
23:17:34 ---> capture-instance-metadata.sh
23:17:34 Setup pyenv:
23:17:34 system
23:17:34 3.8.13
23:17:34 3.9.13
23:17:34 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:34 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Zh8L from file:/tmp/.os_lf_venv
23:17:36 lf-activate-venv(): INFO: Installing: lftools
23:17:46 lf-activate-venv(): INFO: Adding /tmp/venv-Zh8L/bin to PATH
23:17:46 INFO: Running in OpenStack, capturing instance metadata
23:17:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10658688192205416812.sh
23:17:47 provisioning config files...
23:17:47 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config15755228189959814618tmp
23:17:47 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
23:17:47 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
23:17:47 [EnvInject] - Injecting environment variables from a build step.
23:17:47 [EnvInject] - Injecting as environment variables the properties content
23:17:47 SERVER_ID=logs
23:17:47
23:17:47 [EnvInject] - Variables injected successfully.
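package-listing.sh snapshots the installed package set at the start and end of the job and archives the diff, which is what the dpkg/diff/cp commands above are doing. A simplified sketch of that technique for a Debian-family host; the file names mirror the log, the control flow is a reconstruction:

  # At job start:
  dpkg -l | grep '^ii' > /tmp/packages_start.txt
  # ... job runs ...
  # At job end:
  dpkg -l | grep '^ii' > /tmp/packages_end.txt
  # diff exits non-zero when the lists differ, so tolerate that:
  diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
  mkdir -p "$WORKSPACE/archives"   # $WORKSPACE is set by Jenkins
  cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "$WORKSPACE/archives/"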
23:17:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins567138907153378295.sh
23:17:47 ---> create-netrc.sh
23:17:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10033323700225331421.sh
23:17:47 ---> python-tools-install.sh
23:17:47 Setup pyenv:
23:17:47 system
23:17:47 3.8.13
23:17:47 3.9.13
23:17:47 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:47 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Zh8L from file:/tmp/.os_lf_venv
23:17:49 lf-activate-venv(): INFO: Installing: lftools
23:17:56 lf-activate-venv(): INFO: Adding /tmp/venv-Zh8L/bin to PATH
23:17:56 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4580762017676075078.sh
23:17:56 ---> sudo-logs.sh
23:17:56 Archiving 'sudo' log..
23:17:56 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12289679185341084929.sh
23:17:56 ---> job-cost.sh
23:17:56 Setup pyenv:
23:17:56 system
23:17:56 3.8.13
23:17:56 3.9.13
23:17:56 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:57 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Zh8L from file:/tmp/.os_lf_venv
23:17:58 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:18:05 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
23:18:05 lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
23:18:05 lf-activate-venv(): INFO: Adding /tmp/venv-Zh8L/bin to PATH
23:18:05 INFO: No Stack...
23:18:05 INFO: Retrieving Pricing Info for: v3-standard-8
23:18:06 INFO: Archiving Costs
23:18:06 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins4142273024307142563.sh
23:18:06 ---> logs-deploy.sh
23:18:06 Setup pyenv:
23:18:06 system
23:18:06 3.8.13
23:18:06 3.9.13
23:18:06 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:18:06 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Zh8L from file:/tmp/.os_lf_venv
23:18:07 lf-activate-venv(): INFO: Installing: lftools
23:18:16 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
23:18:16 python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
23:18:16 lf-activate-venv(): INFO: Adding /tmp/venv-Zh8L/bin to PATH
23:18:16 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1544
23:18:16 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:18:18 Archives upload complete.
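The two ERROR blocks above are pip's resolver warning that the shared venv holds mutually incompatible pins: lftools 0.37.8 wants openstacksdk<1.5.0 while python-openstackclient 6.4.0 wants openstacksdk>=2.0.0, so whichever install ran last wins and the other tool's requirement is left unsatisfied. A sketch of surfacing such conflicts explicitly; the venv path is the one the log reuses, and pip check is a general technique rather than part of this job:

  # Activate the job's venv and ask pip to verify installed requirements.
  source /tmp/venv-Zh8L/bin/activate
  pip check   # prints e.g. "lftools 0.37.8 has requirement openstacksdk<1.5.0, ..."
  # No single openstacksdk version satisfies both <1.5.0 and >=2.0.0, so the
  # durable fix is separate venvs (or compatible tool versions), not re-pinning.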
23:18:18 INFO: archiving logs to Nexus
23:18:19 ---> uname -a:
23:18:19 Linux prd-ubuntu1804-docker-8c-8g-14039 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:18:19
23:18:19 ---> lscpu:
23:18:19 Architecture: x86_64
23:18:19 CPU op-mode(s): 32-bit, 64-bit
23:18:19 Byte Order: Little Endian
23:18:19 CPU(s): 8
23:18:19 On-line CPU(s) list: 0-7
23:18:19 Thread(s) per core: 1
23:18:19 Core(s) per socket: 1
23:18:19 Socket(s): 8
23:18:19 NUMA node(s): 1
23:18:19 Vendor ID: AuthenticAMD
23:18:19 CPU family: 23
23:18:19 Model: 49
23:18:19 Model name: AMD EPYC-Rome Processor
23:18:19 Stepping: 0
23:18:19 CPU MHz: 2800.000
23:18:19 BogoMIPS: 5600.00
23:18:19 Virtualization: AMD-V
23:18:19 Hypervisor vendor: KVM
23:18:19 Virtualization type: full
23:18:19 L1d cache: 32K
23:18:19 L1i cache: 32K
23:18:19 L2 cache: 512K
23:18:19 L3 cache: 16384K
23:18:19 NUMA node0 CPU(s): 0-7
23:18:19 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:18:19
23:18:19 ---> nproc:
23:18:19 8
23:18:19
23:18:19 ---> df -h:
23:18:19 Filesystem      Size  Used Avail Use% Mounted on
23:18:19 udev             16G     0   16G   0% /dev
23:18:19 tmpfs           3.2G  708K  3.2G   1% /run
23:18:19 /dev/vda1       155G   15G  141G  10% /
23:18:19 tmpfs            16G     0   16G   0% /dev/shm
23:18:19 tmpfs           5.0M     0  5.0M   0% /run/lock
23:18:19 tmpfs            16G     0   16G   0% /sys/fs/cgroup
23:18:19 /dev/vda15      105M  4.4M  100M   5% /boot/efi
23:18:19 tmpfs           3.2G     0  3.2G   0% /run/user/1001
23:18:19
23:18:19 ---> free -m:
23:18:19        total  used   free  shared  buff/cache  available
23:18:19 Mem:   32167   818  24663       0        6684      30892
23:18:19 Swap:   1023     0   1023
23:18:19
23:18:19 ---> ip addr:
23:18:19 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:18:19     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:18:19     inet 127.0.0.1/8 scope host lo
23:18:19        valid_lft forever preferred_lft forever
23:18:19     inet6 ::1/128 scope host
23:18:19        valid_lft forever preferred_lft forever
23:18:19 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:18:19     link/ether fa:16:3e:55:91:a0 brd ff:ff:ff:ff:ff:ff
23:18:19     inet 10.30.107.9/23 brd 10.30.107.255 scope global dynamic ens3
23:18:19        valid_lft 85921sec preferred_lft 85921sec
23:18:19     inet6 fe80::f816:3eff:fe55:91a0/64 scope link
23:18:19        valid_lft forever preferred_lft forever
23:18:19 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:18:19     link/ether 02:42:59:2f:99:71 brd ff:ff:ff:ff:ff:ff
23:18:19     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:18:19        valid_lft forever preferred_lft forever
23:18:19
23:18:19 ---> sar -b -r -n DEV:
23:18:19 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14039) 01/21/24 _x86_64_ (8 CPU)
23:18:19
23:18:19 23:10:21 LINUX RESTART (8 CPU)
23:18:19
23:18:19 23:11:02      tps     rtps     wtps  bread/s    bwrtn/s
23:18:19 23:12:01   115.44    17.88    97.56  1037.65   50308.90
23:18:19 23:13:01   147.16    23.26   123.90  2804.73   54130.98
23:18:19 23:14:01   186.35     0.20   186.15    23.86  117967.94
23:18:19 23:15:01   357.12    11.71   345.41   785.20   80471.02
23:18:19 23:16:01    16.91     0.28    16.63    13.20     429.36
23:18:19 23:17:01     4.63     0.10     4.53    12.66     130.16
23:18:19 23:18:01    74.69     1.40    73.29   107.45    4186.74
23:18:19 Average:   128.93     7.81   121.12   682.68   43930.22
23:18:19
23:18:19 23:11:02 kbmemfree  kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:18:19 23:12:01  30063060 31724500   2876160     8.73     74800  1892312  1447364    4.26   857748 1719516  223148
23:18:19 23:13:01  29482320 31729156   3456900    10.49     90392  2444792  1357784    3.99   935248 2186248  321440
23:18:19 23:14:01  25780524 31696004   7158696    21.73    137700  5914560  1396360    4.11   988228 5651772 1406332
23:18:19 23:15:01  23211448 29754844   9727772    29.53    157268  6479816  8515768   25.06  3093848 6020320     464
23:18:19 23:16:01  22858032 29406820  10081188    30.61    158644  6481624  8988128   26.45  3464268 5997728     300
23:18:19 23:17:01  22847688 29425920  10091532    30.64    158856  6509772  8806316   25.91  3455484 6013920   27056
23:18:19 23:18:01  25266780 31638164   7672440    23.29    161716  6320184  1532576    4.51  1264860 5853556    2088
23:18:19 Average:  25644265 30767915   7294955    22.15    134197  5149009  4577757   13.47  2008526 4777580  282975
23:18:19
23:18:19 23:11:02 IFACE            rxpck/s  txpck/s    rxkB/s  txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:18:19 23:12:01 ens3               64.51    42.87    978.90    7.69    0.00    0.00     0.00    0.00
23:18:19 23:12:01 lo                  1.29     1.29      0.14    0.14    0.00    0.00     0.00    0.00
23:18:19 23:12:01 docker0             0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
23:18:19 23:13:01 ens3              108.07    75.49   2345.11    9.59    0.00    0.00     0.00    0.00
23:18:19 23:13:01 lo                  5.73     5.73      0.54    0.54    0.00    0.00     0.00    0.00
23:18:19 23:13:01 docker0             0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
23:18:19 23:13:01 br-7883eafb062c     0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
23:18:19 23:14:01 ens3             1116.10   570.74  30113.15   41.32    0.00    0.00     0.00    0.00
23:18:19 23:14:01 lo                  8.07     8.07      0.80    0.80    0.00    0.00     0.00    0.00
23:18:19 23:14:01 docker0             0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
23:18:19 23:14:01 br-7883eafb062c     0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
23:18:19 23:15:01 ens3               76.30    36.39   2867.95    3.01    0.00    0.00     0.00    0.00
23:18:19 23:15:01 lo                  1.13     1.13      0.09    0.09    0.00    0.00     0.00    0.00
23:18:19 23:15:01 docker0             0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
23:18:19 23:15:01 vethaaf892a         1.80     1.92      0.18    0.19    0.00    0.00     0.00    0.00
23:18:19 23:16:01 ens3                4.82     4.10      1.02    1.22    0.00    0.00     0.00    0.00
23:18:19 23:16:01 lo                  5.98     5.98      3.63    3.63    0.00    0.00     0.00    0.00
23:18:19 23:16:01 docker0             0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
23:18:19 23:16:01 vethaaf892a        19.66    15.64      2.26    2.37    0.00    0.00     0.00    0.00
23:18:19 23:17:01 ens3               19.83    17.80      7.62   17.23    0.00    0.00     0.00    0.00
23:18:19 23:17:01 lo                  8.53     8.53      0.65    0.65    0.00    0.00     0.00    0.00
23:18:19 23:17:01 docker0             0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
23:18:19 23:17:01 vethaaf892a        13.93     9.40      1.06    1.34    0.00    0.00     0.00    0.00
23:18:19 23:18:01 ens3               59.22    38.03     69.51   16.88    0.00    0.00     0.00    0.00
23:18:19 23:18:01 lo                  0.47     0.47      0.05    0.05    0.00    0.00     0.00    0.00
23:18:19 23:18:01 docker0             0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
23:18:19 Average: ens3              207.31   112.37   5207.55   13.86    0.00    0.00     0.00    0.00
23:18:19 Average: lo                  4.46     4.46      0.84    0.84    0.00    0.00     0.00    0.00
23:18:19 Average: docker0             0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
23:18:19
23:18:19 ---> sar -P ALL:
23:18:19 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14039) 01/21/24 _x86_64_ (8 CPU)
23:18:19
23:18:19 23:10:21 LINUX RESTART (8 CPU)
23:18:19
23:18:19 23:11:02 CPU   %user  %nice  %system  %iowait  %steal  %idle
23:18:19 23:12:01 all    9.82   0.00     0.70     3.32    0.03  86.13
23:18:19 23:12:01   0   12.89   0.00     0.58     4.07    0.07  82.39
23:18:19 23:12:01   1   20.30   0.00     1.12    19.71    0.05  58.82
23:18:19 23:12:01   2   27.13   0.00     1.73     0.85    0.05  70.24
23:18:19 23:12:01   3   11.09   0.00     0.82     0.46    0.07  87.56
23:18:19 23:12:01   4    5.33   0.00     0.66     0.42    0.03  93.55
23:18:19 23:12:01   5    1.20   0.00     0.29     0.31    0.02  98.19
23:18:19 23:12:01   6    0.61   0.00     0.32     0.05    0.02  99.00
23:18:19 23:12:01   7    0.03   0.00     0.07     0.70    0.02  99.19
23:18:19 23:13:01 all    7.99   0.00     1.12     6.20    0.04  84.65
23:18:19 23:13:01   0    8.49   0.00     1.32     3.13    0.03  87.03
23:18:19 23:13:01   1    7.81   0.00     0.89    29.64    0.05  61.61
23:18:19 23:13:01   2    5.26   0.00     0.97     0.43    0.03  93.30
23:18:19 23:13:01   3    2.23   0.00     0.59     1.93    0.07  95.20
23:18:19 23:13:01   4    3.52   0.00     0.77     0.12    0.03  95.56
23:18:19 23:13:01   5   12.84   0.00     1.14     0.84    0.02  85.16
23:18:19 23:13:01   6   16.68   0.00     1.69     7.34    0.03  74.25
23:18:19 23:13:01   7    7.05   0.00     1.59     6.23    0.03  85.09
23:18:19 23:14:01 all   12.22   0.00     5.61     8.68    0.07  73.42
23:18:19 23:14:01   0   11.29   0.00     5.68    13.63    0.07  69.33
23:18:19 23:14:01   1   12.66   0.00     6.48    12.88    0.08  67.89
23:18:19 23:14:01   2   13.53   0.00     5.32     2.47    0.07  78.62
23:18:19 23:14:01   3   11.84   0.00     5.32     0.25    0.05  82.54
23:18:19 23:14:01   4   12.66   0.00     5.06     3.29    0.07  78.92
23:18:19 23:14:01   5   10.47   0.00     5.69    17.64    0.09  66.12
23:18:19 23:14:01   6   12.28   0.00     6.04    18.22    0.10  63.35
23:18:19 23:14:01   7   13.02   0.00     5.32     1.08    0.07  80.51
23:18:19 23:15:01 all   24.56   0.00     4.17     5.47    0.08  65.72
23:18:19 23:15:01   0   25.71   0.00     4.75     0.92    0.08  68.54
23:18:19 23:15:01   1   22.08   0.00     4.04    19.77    0.08  54.03
23:18:19 23:15:01   2   17.86   0.00     3.15     0.80    0.08  78.10
23:18:19 23:15:01   3   29.98   0.00     4.24     3.37    0.12  62.30
23:18:19 23:15:01   4   25.63   0.00     3.66     1.28    0.07  69.37
23:18:19 23:15:01   5   35.78   0.00     5.66    14.57    0.10  43.88
23:18:19 23:15:01   6   20.13   0.00     3.77     2.40    0.07  73.63
23:18:19 23:15:01   7   19.28   0.00     4.06     0.67    0.07  75.91
23:18:19 23:16:01 all   11.31   0.00     1.06     0.05    0.06  87.51
23:18:19 23:16:01   0    9.88   0.00     0.97     0.00    0.05  89.10
23:18:19 23:16:01   1   11.21   0.00     1.07     0.20    0.05  87.47
23:18:19 23:16:01   2   11.81   0.00     1.04     0.05    0.07  87.03
23:18:19 23:16:01   3   10.74   0.00     0.89     0.00    0.05  88.33
23:18:19 23:16:01   4   13.22   0.00     1.34     0.10    0.05  85.29
23:18:19 23:16:01   5   13.20   0.00     1.32     0.02    0.08  85.39
23:18:19 23:16:01   6   10.45   0.00     1.01     0.07    0.07  88.41
23:18:19 23:16:01   7    9.98   0.00     0.87     0.00    0.08  89.07
23:18:19 23:17:01 all    1.33   0.00     0.28     0.02    0.05  98.32
23:18:19 23:17:01   0    2.24   0.00     0.40     0.02    0.07  97.28
23:18:19 23:17:01   1    1.22   0.00     0.23     0.07    0.03  98.45
23:18:19 23:17:01   2    1.35   0.00     0.28     0.05    0.05  98.27
23:18:19 23:17:01   3    0.97   0.00     0.25     0.00    0.05  98.73
23:18:19 23:17:01   4    1.80   0.00     0.23     0.03    0.03  97.90
23:18:19 23:17:01   5    0.84   0.00     0.25     0.02    0.05  98.85
23:18:19 23:17:01   6    1.41   0.00     0.30     0.00    0.07  98.22
23:18:19 23:17:01   7    0.83   0.00     0.30     0.00    0.08  98.78
23:18:19 23:18:01 all    5.68   0.00     0.62     0.50    0.04  93.16
23:18:19 23:18:01   0    2.67   0.00     0.57     0.35    0.02  96.39
23:18:19 23:18:01   1    0.65   0.00     0.57     1.47    0.02  97.30
23:18:19 23:18:01   2    0.84   0.00     0.62     0.65    0.03  97.86
23:18:19 23:18:01   3    5.33   0.00     0.55     0.40    0.03  93.69
23:18:19 23:18:01   4    3.59   0.00     0.37     0.25    0.03  95.76
23:18:19 23:18:01   5    3.49   0.00     0.60     0.27    0.03  95.61
23:18:19 23:18:01   6    1.36   0.00     0.54     0.02    0.03  98.06
23:18:19 23:18:01   7   27.54   0.00     1.17     0.55    0.07  70.67
23:18:19 Average: all   10.40   0.00     1.93     3.45    0.05  84.17
23:18:19 Average:   0   10.43   0.00     2.03     3.14    0.06  84.34
23:18:19 Average:   1   10.80   0.00     2.05    11.92    0.05  75.17
23:18:19 Average:   2   11.06   0.00     1.86     0.75    0.06  86.27
23:18:19 Average:   3   10.30   0.00     1.80     0.92    0.06  86.92
23:18:19 Average:   4    9.37   0.00     1.72     0.78    0.05  88.09
23:18:19 Average:   5   11.12   0.00     2.13     4.78    0.06  81.92
23:18:19 Average:   6    8.99   0.00     1.94     3.99    0.06  85.02
23:18:19 Average:   7   11.12   0.00     1.91     1.32    0.06  85.60
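The closing dump interleaves static host facts (uname, lscpu, df, free, ip addr) with sysstat counters covering the whole run; the sar section headers show the exact flags used. A sketch of collecting the same numbers on a sysstat-enabled Ubuntu host; reading from the daily data file assumes the sadc collector has been sampling, otherwise live sampling works:

  # Disk I/O, memory and per-interface network rates for the retention window:
  sar -b -r -n DEV
  # Per-CPU utilisation (the "CPU %user ... %idle" table above):
  sar -P ALL
  # Or sample live instead: 7 samples at 60-second intervals.
  sar -P ALL 60 7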