Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137060 Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu1804-docker-8c-8g-14134 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-gt5Y9b3lyNIX/agent.2104 SSH_AGENT_PID=2106 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/private_key_692360653604232643.key (/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/private_key_692360653604232643.key) [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) The recommended git tool is: NONE using credential onap-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://cloud.onap.org/mirror/policy/docker.git > git init /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git > git --version # timeout=10 > git --version # 'git version 2.17.1' using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/60/137060/1 # timeout=30 > git rev-parse cf05a6c49bcdb598e106b0c4ae811af750608df3^{commit} # timeout=10 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script Checking out Revision cf05a6c49bcdb598e106b0c4ae811af750608df3 (refs/changes/60/137060/1) > git config core.sparsecheckout # timeout=10 > git checkout -f cf05a6c49bcdb598e106b0c4ae811af750608df3 # timeout=30 Commit message: "Add kafka support in K8s CSIT" > git rev-parse FETCH_HEAD^{commit} # timeout=10 > git rev-list --no-walk caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=10 provisioning config files... 
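For reference, the checkout above follows the usual Gerrit change-ref pattern (fetch refs/changes/<NN>/<change>/<patchset>, then check out FETCH_HEAD); a minimal stand-alone sketch using the same mirror URL and change ref seen in this build, with the resolved commit shown only as a comment:

  git init docker && cd docker
  git fetch git://cloud.onap.org/mirror/policy/docker.git refs/changes/60/137060/1
  git checkout -f FETCH_HEAD   # resolves to cf05a6c49bcdb598e106b0c4ae811af750608df3 in this run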
copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins3371567050161186764.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-yvbo lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-yvbo/bin to PATH Generating Requirements File Python 3.10.6 pip 23.3.2 from /tmp/venv-yvbo/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.2.1 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.23 botocore==1.34.23 bs4==0.0.2 cachetools==5.3.2 certifi==2023.11.17 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.5.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==2.5.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.5.0 docker==4.2.2 dogpile.cache==1.3.0 email-validator==2.1.0.post1 filelock==3.13.1 future==0.18.3 gitdb==4.0.11 GitPython==3.1.41 google-auth==2.26.2 httplib2==0.22.0 identify==2.5.33 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.5.0 kubernetes==29.0.0 lftools==0.37.8 lxml==5.1.0 MarkupSafe==2.1.4 msgpack==1.0.7 multi_key_dict==2.0.3 munch==4.0.0 netaddr==0.10.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==0.62.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.0 oslo.config==9.3.0 oslo.context==5.3.0 oslo.i18n==6.2.0 oslo.log==5.4.0 oslo.serialization==5.3.0 oslo.utils==7.0.0 packaging==23.2 pbr==6.0.0 platformdirs==4.1.0 prettytable==3.9.0 pyasn1==0.5.1 pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.1.1 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.4.0 python-dateutil==2.8.2 python-heatclient==3.4.0 python-jenkins==1.8.2 python-keystoneclient==5.3.0 python-magnumclient==4.3.0 python-novaclient==18.4.0 python-openstackclient==6.0.0 python-swiftclient==4.4.0 pytz==2023.3.post1 PyYAML==6.0.1 referencing==0.32.1 requests==2.31.0 requests-oauthlib==1.3.1 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.17.1 rsa==4.9 ruamel.yaml==0.18.5 ruamel.yaml.clib==0.2.8 s3transfer==0.10.0 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.1.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.3 tqdm==4.66.1 typing_extensions==4.9.0 tzdata==2023.4 urllib3==1.26.18 virtualenv==20.25.0 wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.2.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
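lf-activate-venv() itself belongs to the LF releng tooling and is not echoed into this log; a rough bash equivalent of what its INFO lines report (venv path and package name taken from the log, everything else an assumption):

  python3 -m venv /tmp/venv-yvbo
  /tmp/venv-yvbo/bin/pip install --upgrade pip lftools
  export PATH=/tmp/venv-yvbo/bin:$PATH    # "Adding /tmp/venv-yvbo/bin to PATH"
  pip freeze                              # the "Generating Requirements File" step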
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/sh /tmp/jenkins15022405318179587223.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/sh -xe /tmp/jenkins17503398492636731034.sh + /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/run-project-csit.sh xacml-pdp + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/scripts:/bin + export SCRIPTS=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts + SCRIPTS=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts + export ROBOT_VARIABLES= + ROBOT_VARIABLES= + export PROJECT=xacml-pdp + PROJECT=xacml-pdp + cd /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp + rm -rf /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/archives/xacml-pdp + mkdir -p /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/archives/xacml-pdp + source_safely /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.HzOUjJV6EJ ++ echo ROBOT_VENV=/tmp/tmp.HzOUjJV6EJ +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.HzOUjJV6EJ ++ source /tmp/tmp.HzOUjJV6EJ/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.HzOUjJV6EJ +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/scripts:/bin +++ PATH=/tmp/tmp.HzOUjJV6EJ/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.HzOUjJV6EJ) ' '!=' x ']' +++ PS1='(tmp.HzOUjJV6EJ) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.HzOUjJV6EJ/src/onap ++ rm -rf /tmp/tmp.HzOUjJV6EJ/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. 
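Both pip notes above (the confluent-kafka install and the docker-py/docker swap) can be sanity-checked from the freshly prepared venv; a one-liner sketch, with versions naturally differing per build:

  python3 -c 'import confluent_kafka, docker; print(confluent_kafka.version()[0], docker.__version__)'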
++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==0.18.3 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ sed 's/./& /g' ++ echo ehuxB + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.HzOUjJV6EJ/bin/activate + '[' -z /tmp/tmp.HzOUjJV6EJ/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.HzOUjJV6EJ/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.HzOUjJV6EJ ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/scripts:/bin ++ PATH=/tmp/tmp.HzOUjJV6EJ/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.HzOUjJV6EJ) ' ++ '[' 'x(tmp.HzOUjJV6EJ) ' '!=' x ']' ++ PS1='(tmp.HzOUjJV6EJ) (tmp.HzOUjJV6EJ) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.EfRD3o2gBU + cd /tmp/tmp.EfRD3o2gBU + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/setup-xacml-pdp.sh + '[' -f /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/setup-xacml-pdp.sh ']' + echo 'Running setup script /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/setup-xacml-pdp.sh' Running setup script /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/setup-xacml-pdp.sh + source_safely /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/setup-xacml-pdp.sh + '[' -z /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/setup-xacml-pdp.sh ']' + relax_set + set +e + set +o pipefail + . 
/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/setup-xacml-pdp.sh ++ source /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models +++ mkdir /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models Cloning into '/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models'... +++ export DATA=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models/models-examples/src/main/resources/policies +++ DATA=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models/models-examples/src/main/resources/policies +++ export NODETEMPLATES=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models/models-examples/src/main/resources/nodetemplates +++ NODETEMPLATES=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models/models-examples/src/main/resources/nodetemplates +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json ++ source /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/compose/start-compose.sh xacml-pdp +++ '[' -z /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']' +++ COMPOSE_FOLDER=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/compose +++ grafana=false +++ gui=false +++ [[ 1 -gt 0 ]] +++ key=xacml-pdp +++ case $key in +++ echo xacml-pdp xacml-pdp +++ component=xacml-pdp +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/compose +++ echo 'Configuring docker compose...' Configuring docker compose... +++ source export-ports.sh +++ source get-versions.sh +++ '[' -z xacml-pdp ']' +++ '[' -n xacml-pdp ']' +++ '[' xacml-pdp == logs ']' +++ '[' false = true ']' +++ '[' false = true ']' +++ echo 'Starting xacml-pdp application' Starting xacml-pdp application +++ docker-compose up -d xacml-pdp Creating network "compose_default" with the default driver Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 10.10.2: Pulling from mariadb Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... latest: Pulling from confluentinc/cp-zookeeper Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest Pulling kafka (confluentinc/cp-kafka:latest)... 
latest: Pulling from confluentinc/cp-kafka Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9 Status: Downloaded newer image for confluentinc/cp-kafka:latest Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator Digest: sha256:eb47623eeab9aad8524ecc877b6708ae74b57f9f3cfe77554ad0d1521491cb5d Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-api Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.0)... 3.1.0: Pulling from onap/policy-pap Digest: sha256:ff420a18fdd0393b657dcd1ae9e545437067fe5610606e3999888c21302a6231 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.0 Pulling xacml-pdp (nexus3.onap.org:10001/onap/policy-xacml-pdp:3.1.0)... 3.1.0: Pulling from onap/policy-xacml-pdp Digest: sha256:8e3183438bbb2bfbe8ba7c2545b8ec5b37187f4f260acc202c96c8c3ba80fb31 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-xacml-pdp:3.1.0 Creating compose_zookeeper_1 ... Creating mariadb ... Creating compose_zookeeper_1 ... done Creating kafka ... Creating kafka ... done Creating mariadb ... done Creating policy-db-migrator ... Creating policy-db-migrator ... done Creating policy-api ... Creating policy-api ... done Creating policy-pap ... Creating policy-pap ... done Creating policy-xacml-pdp ... Creating policy-xacml-pdp ... done +++ cd /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ++ sleep 10 ++ unset http_proxy https_proxy ++ export SUITES=xacml-pdp-test.robot ++ SUITES=xacml-pdp-test.robot ++ export KAFKA_IP=localhost:29092 ++ KAFKA_IP=localhost:29092 ++ /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/scripts/wait_for_rest.sh localhost 30004 Waiting for REST to come up on localhost port 30004... 
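wait_for_rest.sh is invoked here but its body is not echoed into the log; a minimal stand-in that polls the same host and port until the xacml-pdp REST endpoint answers (the nc-based port probe, 5 s interval and roughly 5 min budget are assumptions; only the messages mirror the log):

  host=localhost port=30004
  for attempt in $(seq 1 60); do
    if nc -z "$host" "$port" 2>/dev/null; then
      echo "REST detected on $host:$port"; exit 0
    fi
    echo "Waiting for REST to come up on $host port $port..."; sleep 5
  done
  echo "$host port $port REST cannot be detected"; exit 1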
NAMES STATUS policy-xacml-pdp Up 10 seconds policy-pap Up 11 seconds policy-api Up 11 seconds policy-db-migrator Up 12 seconds kafka Up 15 seconds compose_zookeeper_1 Up 16 seconds mariadb Up 13 seconds NAMES STATUS policy-xacml-pdp Up 15 seconds policy-pap Up 16 seconds policy-api Up 17 seconds kafka Up 20 seconds compose_zookeeper_1 Up 21 seconds mariadb Up 19 seconds NAMES STATUS policy-xacml-pdp Up 20 seconds policy-pap Up 21 seconds policy-api Up 22 seconds kafka Up 25 seconds compose_zookeeper_1 Up 26 seconds mariadb Up 24 seconds NAMES STATUS policy-xacml-pdp Up 25 seconds policy-pap Up 26 seconds policy-api Up 27 seconds kafka Up 30 seconds compose_zookeeper_1 Up 31 seconds mariadb Up 29 seconds NAMES STATUS policy-xacml-pdp Up 30 seconds policy-pap Up 31 seconds policy-api Up 32 seconds kafka Up 35 seconds compose_zookeeper_1 Up 36 seconds mariadb Up 34 seconds NAMES STATUS policy-xacml-pdp Up 35 seconds policy-pap Up 36 seconds policy-api Up 37 seconds kafka Up 40 seconds compose_zookeeper_1 Up 41 seconds mariadb Up 39 seconds NAMES STATUS policy-pap Up 41 seconds policy-api Up 42 seconds kafka Up 45 seconds compose_zookeeper_1 Up 46 seconds mariadb Up 44 seconds NAMES STATUS policy-pap Up 46 seconds policy-api Up 47 seconds kafka Up 50 seconds compose_zookeeper_1 Up 51 seconds mariadb Up 49 seconds NAMES STATUS policy-pap Up 51 seconds policy-api Up 52 seconds kafka Up 55 seconds compose_zookeeper_1 Up 56 seconds mariadb Up 54 seconds NAMES STATUS policy-pap Up 56 seconds policy-api Up 57 seconds kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up 59 seconds NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute mariadb Up About a minute NAMES STATUS policy-pap Up About a minute policy-api Up About a minute kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up About a minute NAMES STATUS policy-pap Up 2 minutes 
policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes NAMES STATUS policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 3 minutes compose_zookeeper_1 Up 3 minutes mariadb Up 3 minutes NAMES STATUS policy-pap Up 3 minutes policy-api Up 3 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 
minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 4 minutes compose_zookeeper_1 Up 4 minutes mariadb Up 4 minutes NAMES STATUS policy-pap Up 4 minutes policy-api Up 4 minutes kafka Up 5 minutes compose_zookeeper_1 Up 5 minutes mariadb Up 5 minutes NAMES STATUS policy-pap Up 5 minutes policy-api Up 5 minutes kafka Up 5 minutes compose_zookeeper_1 Up 5 minutes mariadb Up 5 minutes NAMES STATUS policy-pap Up 5 minutes policy-api Up 5 minutes kafka Up 5 minutes compose_zookeeper_1 Up 5 minutes mariadb Up 5 minutes localhost port 30004 REST cannot be detected ++ ROBOT_VARIABLES='-v DATA:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models/models-examples/src/main/resources/policies -v POLICY_PDPX_IP:localhost:30004 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v KAFKA_IP:localhost:localhost:29092' + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + docker_stats + tee /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/archives/xacml-pdp/_sysinfo-1-after-setup.txt ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 09:51:25 up 8 min, 0 users, load average: 0.11, 0.64, 0.46 Tasks: 175 total, 1 running, 110 sleeping, 0 stopped, 0 zombie %Cpu(s): 6.5 us, 1.2 sy, 0.0 ni, 89.3 id, 2.9 wa, 0.0 hi, 0.0 si, 0.0 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.3G 24G 1.1M 5.1G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-pap Up 5 minutes policy-api Up 5 minutes kafka Up 5 minutes compose_zookeeper_1 Up 5 minutes mariadb Up 5 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 58a3591f1e63 policy-pap 0.39% 543MiB / 31.41GiB 1.69% 237kB / 283kB 0B / 181MB 64 cf59e61d3ed2 policy-api 0.16% 465MiB / 31.41GiB 1.45% 1MB / 711kB 0B / 0B 53 6fd56ef1331f kafka 1.06% 383.9MiB / 31.41GiB 1.19% 316kB / 274kB 0B / 733kB 83 27de249558c7 compose_zookeeper_1 2.57% 101.2MiB / 31.41GiB 0.31% 66.7kB / 57.5kB 229kB / 393kB 60 1fe94ab58c0d mariadb 0.03% 101.1MiB / 
31.41GiB 0.31% 1.01MB / 1.2MB 11MB / 67.9MB 27 + echo + cd /tmp/tmp.EfRD3o2gBU + echo 'Reading the testplan:' Reading the testplan: + echo xacml-pdp-test.robot + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' + sed 's|^|/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests/|' + cat testplan.txt /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests/xacml-pdp-test.robot ++ xargs + SUITES=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests/xacml-pdp-test.robot + echo 'ROBOT_VARIABLES=-v DATA:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models/models-examples/src/main/resources/policies -v POLICY_PDPX_IP:localhost:30004 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v KAFKA_IP:localhost:localhost:29092' ROBOT_VARIABLES=-v DATA:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models/models-examples/src/main/resources/policies -v POLICY_PDPX_IP:localhost:30004 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v KAFKA_IP:localhost:localhost:29092 + echo 'Starting Robot test suites /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests/xacml-pdp-test.robot ...' Starting Robot test suites /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests/xacml-pdp-test.robot ... + relax_set + set +e + set +o pipefail + python3 -m robot.run -N xacml-pdp -v WORKSPACE:/tmp -v DATA:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models/models-examples/src/main/resources/policies -v POLICY_PDPX_IP:localhost:30004 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v KAFKA_IP:localhost:localhost:29092 /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests/xacml-pdp-test.robot ============================================================================== xacml-pdp ============================================================================== Healthcheck :: Verify policy xacml-pdp health check [ WARN ] Retrying (RetryAdapter(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/healthcheck?null [ WARN ] Retrying (RetryAdapter(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/healthcheck?null [ WARN ] Retrying (RetryAdapter(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/healthcheck?null | FAIL | ConnectionError: HTTPConnectionPool(host='localhost', port=30004): Max retries exceeded with url: /policy/pdpx/v1/healthcheck?null (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) ------------------------------------------------------------------------------ Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics [ WARN ] Retrying (RetryAdapter(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /metrics?null [ WARN ] Retrying (RetryAdapter(total=1, connect=None, read=None, 
redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /metrics?null [ WARN ] Retrying (RetryAdapter(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /metrics?null | FAIL | ConnectionError: HTTPConnectionPool(host='localhost', port=30004): Max retries exceeded with url: /metrics?null (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) ------------------------------------------------------------------------------ MakeTopics :: Creates the Policy topics | PASS | ------------------------------------------------------------------------------ ExecuteXacmlPolicy [ WARN ] Retrying (RetryAdapter(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true [ WARN ] Retrying (RetryAdapter(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /policy/pdpx/v1/decision?abbrev=true | FAIL | Keyword 'GetDefaultDecision' failed after retrying for 1 minute. The last error was: ConnectionError: HTTPConnectionPool(host='localhost', port=30004): Max retries exceeded with url: /policy/pdpx/v1/decision?abbrev=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) ------------------------------------------------------------------------------ xacml-pdp | FAIL | 4 tests, 1 passed, 3 failed ============================================================================== Output: /tmp/tmp.EfRD3o2gBU/output.xml Log: /tmp/tmp.EfRD3o2gBU/log.html Report: /tmp/tmp.EfRD3o2gBU/report.html + RESULT=3 + load_set + _setopts=hxB ++ tr : ' ' ++ echo braceexpand:hashall:interactive-comments:xtrace + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + echo 'RESULT: 3' RESULT: 3 + exit 3 + on_exit + rc=3 + [[ -n /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ]] + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-pap Up 6 minutes policy-api Up 6 minutes kafka Up 6 minutes compose_zookeeper_1 Up 6 minutes mariadb Up 6 minutes + docker_stats ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 09:52:34 up 10 min, 0 users, load average: 0.17, 0.55, 0.44 Tasks: 172 total, 1 running, 108 sleeping, 0 stopped, 0 zombie %Cpu(s): 5.9 us, 1.1 sy, 0.0 ni, 90.1 id, 2.7 wa, 0.0 hi, 0.0 si, 0.0 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.3G 24G 1.1M 5.1G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-pap Up 6 minutes policy-api Up 6 minutes kafka Up 6 minutes compose_zookeeper_1 Up 6 minutes mariadb Up 6 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 58a3591f1e63 policy-pap 0.44% 543.2MiB / 31.41GiB 1.69% 289kB / 345kB 0B / 181MB 64 cf59e61d3ed2 policy-api 0.10% 471.8MiB / 31.41GiB 1.47% 1.96MB / 1.02MB 0B / 0B 53 6fd56ef1331f kafka 0.96% 
384.3MiB / 31.41GiB 1.19% 379kB / 326kB 0B / 799kB 83 27de249558c7 compose_zookeeper_1 0.10% 100.1MiB / 31.41GiB 0.31% 68.3kB / 58.4kB 229kB / 393kB 60 1fe94ab58c0d mariadb 0.02% 101.5MiB / 31.41GiB 0.32% 1.3MB / 2.13MB 11MB / 68.1MB 28 + echo + source_safely /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/compose/stop-compose.sh + '[' -z /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']' ++ COMPOSE_FOLDER=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/compose ++ cd /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-xacml-pdp, policy-pap, policy-api, policy-db-migrator, kafka, compose_zookeeper_1, mariadb kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-01-22 09:46:10,996] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:host.name=6fd56ef1331f (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO 
Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:10,997] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:11,000] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@23a5fd2 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:11,003] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-01-22 09:46:11,007] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-01-22 09:46:11,013] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 09:46:11,027] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 09:46:11,027] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 09:46:11,035] INFO Socket connection established, initiating session, client: /172.17.0.4:40672, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 09:46:11,069] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003540c0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 09:46:11,196] INFO Session: 0x1000003540c0000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:11,196] INFO EventThread shut down for session: 0x1000003540c0000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2024-01-22 09:46:11,883] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-01-22 09:46:12,263] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-01-22 09:46:12,323] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-01-22 09:46:12,324] INFO starting (kafka.server.KafkaServer) kafka | [2024-01-22 09:46:12,324] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-01-22 09:46:12,337] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-22 09:46:12,340] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,340] INFO Client environment:host.name=6fd56ef1331f (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,340] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,340] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,340] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,340] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/.
./share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,340] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,341] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,341] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,341] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,341] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,341] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,341] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,341] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,341] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,341] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,341] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) zookeeper_1 | ===> User zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper_1 | ===> Configuring ... zookeeper_1 | ===> Running preflight checks ... zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper_1 | ===> Launching ... zookeeper_1 | ===> Launching zookeeper ... zookeeper_1 | [2024-01-22 09:46:09,585] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 09:46:09,594] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 09:46:09,594] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 09:46:09,594] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 09:46:09,594] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 09:46:09,596] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-22 09:46:09,596] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-22 09:46:09,596] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-22 09:46:09,596] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper_1 | [2024-01-22 09:46:09,598] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper_1 | [2024-01-22 09:46:09,598] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 09:46:09,599] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 09:46:09,599] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 09:46:09,599] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 09:46:09,599] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 09:46:09,599] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper_1 | [2024-01-22 09:46:09,614] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@5fa07e12 (org.apache.zookeeper.server.ServerMetrics) zookeeper_1 | [2024-01-22 09:46:09,618] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-01-22 09:46:09,618] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-01-22 09:46:09,621] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-22 09:46:09,631] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,631] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,631] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,631] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,631] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,631] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,631] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,631] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,631] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,631] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:host.name=27de249558c7 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/
usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 
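[editor note] The broker lines further down show kafka opening a ZooKeeper client session with connectString=zookeeper:2181 and sessionTimeout=18000 before it reports "Connected". A minimal sketch of the same handshake using the plain org.apache.zookeeper client API; the connect string and timeout are taken from the kafka log entries below, the latch and class name are illustrative only.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkConnectCheck {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // Same connect string and session timeout as the kafka broker log below.
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000, (WatchedEvent event) -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            });
            connected.await();  // blocks until the session is established
            System.out.println("session id = 0x" + Long.toHexString(zk.getSessionId()));
            zk.close();
        }
    }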
zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,632] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,633] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,633] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,633] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,633] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,633] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,633] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,633] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,633] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,633] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,633] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,634] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper_1 | [2024-01-22 09:46:09,634] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,635] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,635] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-01-22 09:46:09,635] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-01-22 09:46:09,636] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 09:46:09,636] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 09:46:09,636] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 09:46:09,636] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 09:46:09,636] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 09:46:09,636] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 09:46:09,638] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,638] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,639] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-01-22 09:46:09,639] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-01-22 09:46:09,639] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,657] INFO Logging initialized @560ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper_1 | [2024-01-22 09:46:09,738] WARN o.e.j.s.ServletContextHandler@45385f75{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-01-22 09:46:09,738] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-01-22 09:46:09,755] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-01-22 09:46:09,790] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper_1 | [2024-01-22 09:46:09,790] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper_1 | [2024-01-22 09:46:09,792] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper_1 | [2024-01-22 09:46:09,795] WARN ServletContext@o.e.j.s.ServletContextHandler@45385f75{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper_1 | [2024-01-22 09:46:09,804] INFO Started o.e.j.s.ServletContextHandler@45385f75{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-01-22 09:46:09,816] INFO Started ServerConnector@304bb45b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper_1 | [2024-01-22 09:46:09,817] INFO Started @721ms (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-01-22 09:46:09,818] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper_1 | [2024-01-22 09:46:09,822] 
INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-01-22 09:46:09,823] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-01-22 09:46:09,825] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-01-22 09:46:09,827] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-01-22 09:46:09,843] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-01-22 09:46:09,844] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-01-22 09:46:09,845] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-01-22 09:46:09,845] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-01-22 09:46:09,851] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper_1 | [2024-01-22 09:46:09,851] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-22 09:46:09,854] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-01-22 09:46:09,855] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-22 09:46:09,855] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 09:46:09,863] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper_1 | [2024-01-22 09:46:09,863] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper_1 | [2024-01-22 09:46:09,881] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper_1 | [2024-01-22 09:46:09,882] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper_1 | [2024-01-22 09:46:11,048] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) kafka | [2024-01-22 09:46:12,341] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,342] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68be8808 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 09:46:12,346] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-01-22 09:46:12,351] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 09:46:12,352] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-22 09:46:12,354] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 09:46:12,387] INFO Socket connection established, initiating session, client: /172.17.0.4:40674, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 09:46:12,432] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003540c0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 09:46:12,437] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-22 09:46:12,826] INFO Cluster ID = BPdQ-V_zRw-DYkZRIBhaQQ (kafka.server.KafkaServer) kafka | [2024-01-22 09:46:12,829] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-01-22 09:46:12,871] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 
2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.5-IV2 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes 
= 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.2:3306) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.4:9092) open policy-pap | Waiting for api port 6969... 
policy-pap | api (172.17.0.6:6969) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | :: Spring Boot :: (v3.1.4) policy-pap | policy-pap | [2024-01-22T09:46:38.286+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 35 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2024-01-22T09:46:38.288+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-pap | [2024-01-22T09:46:40.509+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2024-01-22T09:46:40.622+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 95 ms. Found 7 JPA repository interfaces. policy-pap | [2024-01-22T09:46:41.107+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-01-22T09:46:41.108+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-01-22T09:46:41.942+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2024-01-22T09:46:41.953+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2024-01-22T09:46:41.955+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2024-01-22T09:46:41.955+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] policy-pap | [2024-01-22T09:46:42.053+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2024-01-22T09:46:42.054+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3656 ms policy-pap | [2024-01-22T09:46:42.564+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2024-01-22T09:46:42.677+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-pap | [2024-01-22T09:46:42.680+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer policy-pap | [2024-01-22T09:46:42.723+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2024-01-22T09:46:43.059+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2024-01-22T09:46:43.077+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2024-01-22T09:46:43.198+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee6291f policy-pap | [2024-01-22T09:46:43.200+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
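[editor note] The "HikariPool-1 - Started/Start completed" entries above are the usual HikariCP bootstrap against the mariadb container. A minimal sketch of that pool setup, assuming the mariadb-java-client and HikariCP jars are on the classpath; the JDBC URL, schema name (policyadmin, echoed later by the migrator) and credentials are illustrative placeholders, not values read from papParameters.yaml.

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;

    public class PapDataSourceSketch {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            // Hypothetical connection details; the real ones come from the PAP configuration.
            config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin");
            config.setUsername("policy_user");
            config.setPassword("policy_user");
            config.setMaximumPoolSize(10);
            try (HikariDataSource ds = new HikariDataSource(config);
                 Connection conn = ds.getConnection()) {
                System.out.println("connected: " + conn.getMetaData().getURL());
            }
        }
    }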
policy-pap | [2024-01-22T09:46:43.230+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) policy-pap | [2024-01-22T09:46:43.232+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead policy-pap | [2024-01-22T09:46:45.178+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2024-01-22T09:46:45.181+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-01-22T09:46:45.800+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2024-01-22T09:46:46.423+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-01-22T09:46:46.521+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-01-22T09:46:46.785+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.2:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.5:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.4) policy-api | policy-api | [2024-01-22T09:46:23.576+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-01-22T09:46:23.577+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-01-22T09:46:25.396+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 
policy-api | [2024-01-22T09:46:25.481+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 75 ms. Found 6 JPA repository interfaces. policy-api | [2024-01-22T09:46:25.918+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-01-22T09:46:25.919+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-01-22T09:46:26.569+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-01-22T09:46:26.578+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-01-22T09:46:26.581+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-01-22T09:46:26.581+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] policy-api | [2024-01-22T09:46:26.679+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-01-22T09:46:26.679+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3037 ms policy-api | [2024-01-22T09:46:27.108+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-01-22T09:46:27.178+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-api | [2024-01-22T09:46:27.181+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer policy-api | [2024-01-22T09:46:27.217+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-01-22T09:46:27.581+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-01-22T09:46:27.599+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-01-22T09:46:27.697+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7c8d5312 policy-api | [2024-01-22T09:46:27.701+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-api | [2024-01-22T09:46:27.745+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) policy-api | [2024-01-22T09:46:27.746+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead policy-api | [2024-01-22T09:46:29.730+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-01-22T09:46:29.734+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-01-22T09:46:31.140+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-01-22T09:46:32.045+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-01-22T09:46:33.297+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-01-22T09:46:33.518+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@19a7e618, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@22ccd80f, org.springframework.security.web.context.SecurityContextHolderFilter@2f29400e, org.springframework.security.web.header.HeaderWriterFilter@56d3e4a9, org.springframework.security.web.authentication.logout.LogoutFilter@ab8b1ef, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@543d242e, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@547a79cd, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@25e7e6d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@31829b82, org.springframework.security.web.access.ExceptionTranslationFilter@36c6d53b, org.springframework.security.web.access.intercept.AuthorizationFilter@680f7a5e] policy-api | [2024-01-22T09:46:34.590+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-01-22T09:46:34.662+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-01-22T09:46:34.686+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-01-22T09:46:34.702+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.837 seconds (process running for 12.47) policy-api | [2024-01-22T09:51:29.694+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-01-22T09:51:29.695+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-01-22T09:51:29.698+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms policy-api | [2024-01-22T09:51:29.975+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: policy-api | [] policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 
531f91b3-d52b-4d23-8b1f-214c1df749e6 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! 
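[editor note] The repeated "nc: connect to mariadb ... Connection refused" lines above are the db-migrator polling until the database accepts TCP connections before it starts the upgrade scripts shown next. A minimal Java sketch of the same wait-for-port pattern; host and port come from the log, the retry interval and timeout are illustrative.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class WaitForPort {
        public static void main(String[] args) throws InterruptedException {
            String host = "mariadb";
            int port = 3306;
            while (true) {
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(host, port), 2000);
                    System.out.println("Connection to " + host + " " + port + " succeeded!");
                    return;
                } catch (IOException e) {
                    System.out.println("connect to " + host + " port " + port + " failed: " + e.getMessage());
                    Thread.sleep(2000);  // retry every couple of seconds, like the nc loop above
                }
            }
        }
    }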
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | 
-------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | 
ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-01-22T09:46:46.932+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-pap | [2024-01-22T09:46:46.932+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-pap | [2024-01-22T09:46:46.932+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916806930 policy-pap | [2024-01-22T09:46:46.934+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-1, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-01-22T09:46:46.934+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | 
ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | unstable.api.versions.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | 
zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-01-22T09:46:46.940+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-pap | [2024-01-22T09:46:46.940+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-pap | [2024-01-22T09:46:46.940+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916806940 policy-pap | [2024-01-22T09:46:46.940+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2024-01-22 09:46:12,897] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-01-22 09:46:12,898] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-01-22 09:46:12,898] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-01-22 09:46:12,900] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-01-22 09:46:12,935] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2024-01-22 09:46:12,939] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | [2024-01-22 09:46:12,948] INFO Loaded 0 logs in 12ms 
(kafka.log.LogManager) kafka | [2024-01-22 09:46:12,949] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2024-01-22 09:46:12,950] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) kafka | [2024-01-22 09:46:12,959] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2024-01-22 09:46:13,002] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) kafka | [2024-01-22 09:46:13,015] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2024-01-22 09:46:13,034] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-01-22 09:46:13,078] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2024-01-22 09:46:13,414] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-01-22 09:46:13,436] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2024-01-22 09:46:13,436] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-01-22 09:46:13,442] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2024-01-22 09:46:13,447] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2024-01-22 09:46:13,478] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-01-22 09:46:13,480] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-01-22 09:46:13,482] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-01-22 09:46:13,482] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-01-22 09:46:13,495] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2024-01-22 09:46:13,518] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2024-01-22 09:46:13,553] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1705916773540,1705916773540,1,0,0,72057608332902401,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2024-01-22 09:46:13,554] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2024-01-22 09:46:13,613] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2024-01-22 09:46:13,620] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-01-22 09:46:13,626] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-01-22 09:46:13,628] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-01-22 09:46:13,629] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2024-01-22 09:46:13,643] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:13,648] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,650] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:13,653] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,658] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-01-22 09:46:13,667] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-01-22 09:46:13,676] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2024-01-22 09:46:13,676] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-01-22 09:46:13,691] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 
(kafka.server.metadata.ZkMetadataCache) kafka | [2024-01-22 09:46:13,691] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,698] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,705] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) policy-pap | [2024-01-22T09:46:47.275+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2024-01-22T09:46:47.413+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2024-01-22T09:46:47.650+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@581d5b33, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1f365a26, org.springframework.security.web.context.SecurityContextHolderFilter@f287a4e, org.springframework.security.web.header.HeaderWriterFilter@497fd334, org.springframework.security.web.authentication.logout.LogoutFilter@49c4118b, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1238a074, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@71d2261e, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@1fa796a4, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@44a9971f, org.springframework.security.web.access.ExceptionTranslationFilter@3d3b852e, org.springframework.security.web.access.intercept.AuthorizationFilter@6a562255] policy-pap | [2024-01-22T09:46:48.485+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-pap | [2024-01-22T09:46:48.544+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2024-01-22T09:46:48.582+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-pap | [2024-01-22T09:46:48.598+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2024-01-22T09:46:48.598+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2024-01-22T09:46:48.598+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2024-01-22T09:46:48.599+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2024-01-22T09:46:48.599+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2024-01-22T09:46:48.600+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2024-01-22T09:46:48.600+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2024-01-22T09:46:48.605+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=531f91b3-d52b-4d23-8b1f-214c1df749e6, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4d6027be policy-pap | [2024-01-22T09:46:48.614+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=531f91b3-d52b-4d23-8b1f-214c1df749e6, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], 
topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-01-22T09:46:48.614+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 531f91b3-d52b-4d23-8b1f-214c1df749e6 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 kafka | [2024-01-22 09:46:13,711] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-01-22 09:46:13,712] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,731] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,738] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,743] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2024-01-22 09:46:13,745] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2024-01-22 09:46:13,753] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2024-01-22 09:46:13,755] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,756] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 
0290-jpatoscapolicy_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = 
null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | kafka | [2024-01-22 09:46:13,756] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,757] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,759] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2024-01-22 09:46:13,760] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,761] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,762] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,762] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2024-01-22 09:46:13,763] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2024-01-22 09:46:13,764] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,768] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2024-01-22 09:46:13,768] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2024-01-22 09:46:13,774] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-01-22 09:46:13,774] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-01-22 09:46:13,774] INFO Kafka startTimeMs: 1705916773770 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-01-22 09:46:13,775] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2024-01-22 09:46:13,775] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-01-22 09:46:13,776] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2024-01-22 09:46:13,777] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-01-22 09:46:13,778] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2024-01-22 09:46:13,778] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2024-01-22 09:46:13,778] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2024-01-22 09:46:13,789] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2024-01-22 09:46:13,790] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() 
(kafka.controller.ZkPartitionStateMachine) kafka | [2024-01-22 09:46:13,791] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,798] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,798] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,799] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,799] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,801] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,811] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:13,888] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-01-22 09:46:13,894] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2024-01-22 09:46:13,952] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2024-01-22 09:46:18,813] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:18,814] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:49,159] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-01-22 09:46:49,165] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) 
(kafka.zk.AdminZkClient) kafka | [2024-01-22 09:46:49,209] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:49,247] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:49,272] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(OaftdXmYRROIVmYTHrFpHg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(jcMNsJOTTRCJff5JbP9Ksg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:49,274] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2024-01-22 09:46:49,276] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,276] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) kafka | [2024-01-22 09:46:49,276] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,276] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) mariadb | 2024-01-22 09:46:08+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-01-22 09:46:08+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-01-22 09:46:08+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-01-22 09:46:08+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-01-22 9:46:08 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-22 9:46:08 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-22 9:46:08 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-01-22 09:46:10+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-01-22 09:46:10+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-01-22 09:46:10+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-01-22 9:46:10 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ... 
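
For reference, the ConsumerConfig dumps above (bootstrap.servers = [kafka:9092], group.id = policy-pap / 531f91b3-d52b-4d23-8b1f-214c1df749e6, auto.offset.reset = latest, topic policy-pdp-pap) can be exercised outside the PAP container with a minimal consumer. The following is only a sketch, assuming confluent-kafka is installed and the compose network resolves the broker as kafka:9092; it mirrors a subset of the logged values and is not part of the CSIT run itself.

    from confluent_kafka import Consumer

    # Settings mirror the ConsumerConfig values printed by policy-pap above;
    # only a subset is given here, the rest fall back to client defaults.
    conf = {
        "bootstrap.servers": "kafka:9092",   # broker address used in this compose setup
        "group.id": "policy-pap",            # same group id as the PAP heartbeat consumer
        "auto.offset.reset": "latest",
        "enable.auto.commit": True,
        "session.timeout.ms": 45000,
    }

    consumer = Consumer(conf)
    consumer.subscribe(["policy-pdp-pap"])   # topic shown in the "Subscribed to topic(s)" log lines

    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                print("consumer error:", msg.error())
                continue
            print(msg.topic(), msg.partition(), msg.value().decode("utf-8", errors="replace"))
    finally:
        consumer.close()
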
mariadb | 2024-01-22 9:46:10 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-01-22 9:46:10 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-01-22 9:46:10 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-01-22 9:46:10 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-01-22 9:46:10 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-22 9:46:10 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF kafka | [2024-01-22 09:46:49,276] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,276] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,277] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,277] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,277] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,277] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,277] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,277] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,277] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,277] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,277] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,277] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,278] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,278] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,278] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,278] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,278] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,278] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,278] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,278] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,278] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,278] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,279] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-22 09:46:49,280] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | [2024-01-22T09:46:48.619+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-pap | [2024-01-22T09:46:48.619+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-pap | [2024-01-22T09:46:48.619+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916808619 policy-pap | [2024-01-22T09:46:48.619+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, 
groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-01-22T09:46:48.620+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2024-01-22T09:46:48.620+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=aabd0ff4-43c4-4b5d-a08d-f5ed489ddc0a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6a262980 policy-pap | [2024-01-22T09:46:48.620+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=aabd0ff4-43c4-4b5d-a08d-f5ed489ddc0a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-01-22T09:46:48.620+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 mariadb | 2024-01-22 9:46:10 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 
2024-01-22 9:46:10 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-01-22 9:46:10 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-01-22 9:46:10 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-01-22 9:46:10 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-01-22 9:46:10 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-01-22 9:46:10 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-01-22 9:46:10 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-01-22 9:46:10 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-01-22 9:46:10 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-01-22 9:46:10 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-01-22 9:46:10 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-01-22 09:46:11+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-01-22 09:46:12+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-01-22 09:46:12+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-01-22 09:46:12+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-01-22 09:46:12+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. 
mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 
policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 
09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,287] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica 
(state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,288] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-01-22T09:46:48.624+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-pap | [2024-01-22T09:46:48.624+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-pap | [2024-01-22T09:46:48.624+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916808624 policy-pap | [2024-01-22T09:46:48.624+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-01-22T09:46:48.625+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2024-01-22T09:46:48.625+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=aabd0ff4-43c4-4b5d-a08d-f5ed489ddc0a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | 
[2024-01-22T09:46:48.625+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=531f91b3-d52b-4d23-8b1f-214c1df749e6, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-01-22T09:46:48.625+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0db60be7-d10a-4619-acd1-46983da62ed9, alive=false, publisher=null]]: starting policy-pap | [2024-01-22T09:46:48.640+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-01-22 09:46:49,289] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,289] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-22 09:46:49,290] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] 
INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,460] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,461] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,465] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,465] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,465] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,465] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,465] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,466] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,466] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2024-01-22 09:46:49,466] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,466] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,466] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,466] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,466] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,466] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,466] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,467] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,467] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,467] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,467] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,467] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,467] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,467] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,468] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-22 09:46:49,468] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE 
IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT 
DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-01-22T09:46:48.651+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
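The producer and consumer configuration dumps above show both PAP Kafka clients pointed at bootstrap.servers=[kafka:9092] over PLAINTEXT, exchanging messages on the policy-pdp-pap topic that the controller log created earlier. A minimal sketch for inspecting that traffic from the compose network follows; it assumes the standard Kafka CLI tools (kafka-topics.sh / kafka-console-consumer.sh, or the suffix-less Confluent equivalents) are available inside the kafka service, which is not something this log shows.

#!/usr/bin/env bash
# Sketch: inspect the policy-pdp-pap topic that the PAP producer/consumer above are wired to.
# Assumes a compose service named "kafka" with the stock Kafka CLI tools on its PATH.
set -euo pipefail
BOOTSTRAP=kafka:9092
TOPIC=policy-pdp-pap

# Confirm the topic exists (the controller log above shows policy-pdp-pap-0 moving to OnlinePartition).
docker compose exec kafka kafka-topics.sh --bootstrap-server "$BOOTSTRAP" --describe --topic "$TOPIC"

# Tail messages; PAP heartbeats and PDP status updates should show up here once the PDPs register.
docker compose exec kafka kafka-console-consumer.sh --bootstrap-server "$BOOTSTRAP" --topic "$TOPIC" --from-beginning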
policy-pap | [2024-01-22T09:46:48.666+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-pap | [2024-01-22T09:46:48.666+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-pap | [2024-01-22T09:46:48.666+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916808666 policy-pap | [2024-01-22T09:46:48.666+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0db60be7-d10a-4619-acd1-46983da62ed9, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-01-22T09:46:48.666+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4e2e724c-4ea4-4504-84c4-91ab75b7b55c, alive=false, publisher=null]]: starting policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql mariadb | policy-xacml-pdp | Waiting for mariadb port 3306... policy-xacml-pdp | mariadb (172.17.0.2:3306) open policy-xacml-pdp | Waiting for kafka port 9092... policy-xacml-pdp | kafka (172.17.0.4:9092) open policy-xacml-pdp | Waiting for pap port 6969... 
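The policy-xacml-pdp entrypoint lines above ("Waiting for mariadb port 3306... mariadb (172.17.0.2:3306) open", and the same for kafka and pap) show the startup ordering: the PDP blocks until its dependencies accept TCP connections before it parses its configuration. A minimal wait-loop sketch in plain bash is shown below; it is an illustrative stand-in for that behaviour, not the entrypoint script shipped in the image.

#!/usr/bin/env bash
# Sketch: block until a TCP port accepts connections, in the spirit of the
# "Waiting for <service> port ..." lines above. Illustrative only.
wait_for_port() {
  local host=$1 port=$2 retries=${3:-60}
  echo "Waiting for ${host} port ${port}..."
  for ((i = 0; i < retries; i++)); do
    # /dev/tcp/<host>/<port> is a bash pseudo-device; the redirect fails until the port is open.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      echo "${host} (${port}) open"
      return 0
    fi
    sleep 1
  done
  echo "Timed out waiting for ${host}:${port}" >&2
  return 1
}

wait_for_port mariadb 3306
wait_for_port kafka 9092
wait_for_port pap 6969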
policy-xacml-pdp | pap (172.17.0.7:6969) open policy-xacml-pdp | + KEYSTORE=/opt/app/policy/pdpx/etc/ssl/policy-keystore policy-xacml-pdp | + TRUSTSTORE=/opt/app/policy/pdpx/etc/ssl/policy-truststore policy-xacml-pdp | + KEYSTORE_PASSWD=Pol1cy_0nap policy-xacml-pdp | + TRUSTSTORE_PASSWD=Pol1cy_0nap policy-xacml-pdp | + '[' 0 -ge 1 ] policy-xacml-pdp | + CONFIG_FILE= policy-xacml-pdp | + '[' -z ] policy-xacml-pdp | + CONFIG_FILE=/opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-truststore ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-keystore ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/xacml.properties ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/logback.xml ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/createguardtable.sql ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/db.sql ] policy-xacml-pdp | + /opt/app/policy/pdpx/mysql/bin/create-guard-table.sh policy-xacml-pdp | + SQL_FILE=/opt/app/policy/pdpx/mysql/sql/createguardtable.sql policy-xacml-pdp | + SQL_ADDON_FILE=/opt/app/policy/pdpx/mysql/sql/db.sql policy-xacml-pdp | + sed 's/\\//g' /opt/app/policy/pdpx/apps/guard/xacml.properties policy-xacml-pdp | + '[' '!' -f /tmp/temp.xacml.properties ] policy-xacml-pdp | + awk '-F[/:]' '$1 == "jakarta.persistence.jdbc.url=jdbc" { print $3 $5 }' /tmp/temp.xacml.properties policy-xacml-pdp | + DB_HOSTNAME=mariadb policy-xacml-pdp | + awk '-F=' '$1 == "jakarta.persistence.jdbc.user" { print $2 }' /tmp/temp.xacml.properties policy-xacml-pdp | + DB_USERNAME=policy_user policy-xacml-pdp | + awk '-F=' '$1 == "jakarta.persistence.jdbc.password" { st = index($0,"="); print substr($0,st+1) }' /tmp/temp.properties policy-xacml-pdp | awk: /tmp/temp.properties: No such file or directory policy-xacml-pdp | + DB_PASSWORD= policy-xacml-pdp | + rm /tmp/temp.xacml.properties policy-xacml-pdp | + '[' -z mariadb ] policy-xacml-pdp | + '[' -z policy_user ] policy-xacml-pdp | + '[' -z ] policy-xacml-pdp | + echo 'No db password provided in guard xacml.properties.' policy-xacml-pdp | + exit 2 policy-xacml-pdp | No db password provided in guard xacml.properties. 
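The create-guard-table.sh trace above strips escaping backslashes from the guard xacml.properties with sed and then extracts the DB host, user and password from the jakarta.persistence.jdbc.* keys with awk. In this run the password step reads /tmp/temp.properties while the earlier steps operate on /tmp/temp.xacml.properties, so DB_PASSWORD stays empty and the script exits 2 with "No db password provided in guard xacml.properties." The following is a minimal reconstruction of the intended parsing against one consistent temp file; sample.properties and its values are hypothetical stand-ins, and this is a sketch of the traced logic, not the shipped script.

#!/usr/bin/env bash
# Sketch of the property parsing traced above, using one consistent temp file.
# sample.properties is a hypothetical stand-in for /opt/app/policy/pdpx/apps/guard/xacml.properties.
set -euo pipefail

cat > sample.properties <<'EOF'
jakarta.persistence.jdbc.url=jdbc:mariadb://mariadb:3306/operationshistory
jakarta.persistence.jdbc.user=policy_user
jakarta.persistence.jdbc.password=example_password
EOF

TMP=/tmp/temp.xacml.properties
sed 's/\\//g' sample.properties > "$TMP"   # drop escaping backslashes, as in the trace

# Same awk expressions as the trace: host from the JDBC URL, user and password from their keys.
DB_HOSTNAME=$(awk -F'[/:]' '$1 == "jakarta.persistence.jdbc.url=jdbc" { print $3 $5 }' "$TMP")
DB_USERNAME=$(awk -F'=' '$1 == "jakarta.persistence.jdbc.user" { print $2 }' "$TMP")
DB_PASSWORD=$(awk -F'=' '$1 == "jakarta.persistence.jdbc.password" { st = index($0,"="); print substr($0,st+1) }' "$TMP")
rm -f "$TMP"

echo "host=${DB_HOSTNAME} user=${DB_USERNAME} password=${DB_PASSWORD:+<set>}"

Run against the hypothetical sample file this prints host=mariadb user=policy_user password=<set>, which is the result the trace above was heading for before it bailed out on the missing file.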
policy-xacml-pdp | Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | + echo 'Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json' policy-xacml-pdp | + /usr/lib/jvm/default-jvm/bin/java -cp '/opt/app/policy/pdpx/etc:/opt/app/policy/pdpx/lib/*' '-Dlogback.configurationFile=/opt/app/policy/pdpx/etc/logback.xml' '-Djavax.net.ssl.keyStore=/opt/app/policy/pdpx/etc/ssl/policy-keystore' '-Djavax.net.ssl.keyStorePassword=Pol1cy_0nap' '-Djavax.net.ssl.trustStore=/opt/app/policy/pdpx/etc/ssl/policy-truststore' '-Djavax.net.ssl.trustStorePassword=Pol1cy_0nap' org.onap.policy.pdpx.main.startstop.Main -c /opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | [2024-01-22T09:46:49.855+00:00|INFO|Main|main] Starting policy xacml pdp service with arguments - [-c, /opt/app/policy/pdpx/etc/defaultConfig.json] policy-xacml-pdp | [2024-01-22T09:46:50.022+00:00|INFO|XacmlPdpActivator|main] Activator initializing using org.onap.policy.pdpx.main.parameters.XacmlPdpParameterGroup@574b560f policy-xacml-pdp | [2024-01-22T09:46:50.068+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-xacml-pdp | allow.auto.create.topics = true policy-xacml-pdp | auto.commit.interval.ms = 5000 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | auto.offset.reset = latest policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | check.crcs = true policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = consumer-e05b1419-7a24-40a9-8770-7ef720bc0059-1 policy-xacml-pdp | client.rack = policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, 
requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | -------------- policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | default.api.timeout.ms = 60000 policy-xacml-pdp | enable.auto.commit = true policy-xacml-pdp | exclude.internal.topics = true policy-xacml-pdp | fetch.max.bytes = 52428800 policy-xacml-pdp | fetch.max.wait.ms = 500 policy-xacml-pdp | fetch.min.bytes = 1 policy-xacml-pdp | group.id = e05b1419-7a24-40a9-8770-7ef720bc0059 policy-xacml-pdp | group.instance.id = null policy-xacml-pdp | heartbeat.interval.ms = 3000 policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | internal.leave.group.on.close = true policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-xacml-pdp | isolation.level = read_uncommitted policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | max.partition.fetch.bytes = 1048576 policy-xacml-pdp | max.poll.interval.ms = 300000 policy-xacml-pdp | max.poll.records = 500 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-xacml-pdp | receive.buffer.bytes = 65536 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 
policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | session.timeout.ms = 45000 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY 
KEY PK_TOSCAPOLICYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` 
VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-xacml-pdp | ssl.cipher.suites = null kafka | [2024-01-22 09:46:49,468] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-01-22T09:46:48.667+00:00|INFO|ProducerConfig|main] ProducerConfig values: mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" policy-db-migrator | policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-01-22 09:46:49,468] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | acks = -1 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-xacml-pdp | ssl.endpoint.identification.algorithm = https kafka | [2024-01-22 09:46:49,468] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | auto.include.jmx.reporter = true mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql policy-db-migrator | -------------- policy-xacml-pdp | ssl.engine.factory.class = null kafka | [2024-01-22 09:46:49,468] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | batch.size = 16384 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-xacml-pdp | ssl.key.password = null kafka | [2024-01-22 09:46:49,468] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition 
with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] mariadb | policy-db-migrator | -------------- policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 kafka | [2024-01-22 09:46:49,469] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | buffer.memory = 33554432 mariadb | 2024-01-22 09:46:13+00:00 [Note] [Entrypoint]: Stopping temporary server policy-db-migrator | policy-xacml-pdp | ssl.keystore.certificate.chain = null kafka | [2024-01-22 09:46:49,469] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips mariadb | 2024-01-22 9:46:13 0 [Note] mariadbd (initiated by: unknown): Normal shutdown policy-db-migrator | policy-xacml-pdp | ssl.keystore.key = null kafka | [2024-01-22 09:46:49,469] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | client.id = producer-2 mariadb | 2024-01-22 9:46:13 0 [Note] InnoDB: FTS optimize thread exiting. policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-xacml-pdp | ssl.keystore.location = null kafka | [2024-01-22 09:46:49,471] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) policy-pap | compression.type = none mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Starting shutdown... 
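[editor note] Each "> upgrade NNNN-name.sql" block in the policy-db-migrator output is the migrator echoing one numbered schema script and then executing it against MariaDB. A rough sketch of that apply loop, assuming a plain mysql-client invocation and hypothetical SQL_DIR/DB_NAME values; only the mariadb host and policy_user credentials appear in the log, and the real policy-db-migrator adds version bookkeeping this sketch omits:

    #!/bin/sh
    # Hypothetical script location and schema name, for illustration only.
    SQL_DIR=/opt/app/policy/etc/db/migration
    DB_HOST=mariadb
    DB_USER=policy_user
    DB_PASS=policy_user
    DB_NAME=policyadmin

    for f in "${SQL_DIR}"/*.sql; do        # 0540-toscacapabilitytype.sql, 0550-..., ...
      echo "> upgrade $(basename "${f}")"
      echo "--------------"
      mysql -h"${DB_HOST}" -u"${DB_USER}" -p"${DB_PASS}" "${DB_NAME}" < "${f}" || exit 1
      echo "--------------"
    done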
policy-db-migrator | -------------- policy-xacml-pdp | ssl.keystore.password = null kafka | [2024-01-22 09:46:49,471] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) policy-pap | connections.max.idle.ms = 540000 mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) policy-xacml-pdp | ssl.keystore.type = JKS kafka | [2024-01-22 09:46:49,471] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) policy-pap | delivery.timeout.ms = 120000 mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Buffer pool(s) dump completed at 240122 9:46:14 policy-db-migrator | -------------- policy-xacml-pdp | ssl.protocol = TLSv1.3 kafka | [2024-01-22 09:46:49,471] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) policy-pap | enable.idempotence = true mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" policy-db-migrator | policy-xacml-pdp | ssl.provider = null kafka | [2024-01-22 09:46:49,472] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) policy-pap | interceptor.classes = [] mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Shutdown completed; log sequence number 330574; transaction id 298 policy-db-migrator | policy-xacml-pdp | ssl.secure.random.implementation = null kafka | [2024-01-22 09:46:49,472] TRACE [Controller id=1 epoch=1] 
Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer mariadb | 2024-01-22 9:46:14 0 [Note] mariadbd: Shutdown complete policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX kafka | [2024-01-22 09:46:49,472] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) policy-pap | linger.ms = 0 mariadb | policy-db-migrator | -------------- policy-xacml-pdp | ssl.truststore.certificates = null kafka | [2024-01-22 09:46:49,472] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) policy-pap | max.block.ms = 60000 mariadb | 2024-01-22 09:46:14+00:00 [Note] [Entrypoint]: Temporary server stopped policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-xacml-pdp | ssl.truststore.location = null kafka | [2024-01-22 09:46:49,472] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) policy-pap | max.in.flight.requests.per.connection = 5 mariadb | policy-db-migrator | -------------- policy-xacml-pdp | ssl.truststore.password = null kafka | [2024-01-22 09:46:49,472] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) policy-pap | max.request.size = 1048576 mariadb | 2024-01-22 09:46:14+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 
policy-db-migrator | kafka | [2024-01-22 09:46:49,472] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-xacml-pdp | ssl.truststore.type = JKS policy-pap | metadata.max.age.ms = 300000 mariadb | policy-db-migrator | kafka | [2024-01-22 09:46:49,472] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | metadata.max.idle.ms = 300000 mariadb | 2024-01-22 9:46:14 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... kafka | [2024-01-22 09:46:49,472] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-xacml-pdp | policy-pap | metric.reporters = [] mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,473] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.230+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-pap | metrics.num.samples = 2 mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Number of transaction pools: 1 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-01-22 09:46:49,473] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 
policy-xacml-pdp | [2024-01-22T09:46:50.230+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-pap | metrics.recording.level = INFO mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,473] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.230+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916810228 policy-pap | metrics.sample.window.ms = 30000 mariadb | 2024-01-22 9:46:14 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) policy-db-migrator | kafka | [2024-01-22 09:46:49,473] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.232+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e05b1419-7a24-40a9-8770-7ef720bc0059-1, groupId=e05b1419-7a24-40a9-8770-7ef720bc0059] Subscribed to topic(s): policy-pdp-pap policy-pap | partitioner.adaptive.partitioning.enable = true mariadb | 2024-01-22 9:46:14 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) policy-db-migrator | kafka | [2024-01-22 09:46:49,473] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.299+00:00|INFO|XacmlPdpApplicationManager|main] Initialization applications org.onap.policy.pdpx.main.parameters.XacmlApplicationParameters@56113384 JerseyClient(name=policyApiParameters, https=false, selfSignedCerts=false, hostname=policy-api, port=6969, basePath=null, userName=policyadmin, password=zb!XztG34, client=org.glassfish.jersey.client.JerseyClient@61f05988, baseUrl=http://policy-api:6969/, alive=true) policy-pap | partitioner.availability.timeout.ms = 0 mariadb | 2024-01-22 9:46:14 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql kafka | [2024-01-22 09:46:49,473] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.316+00:00|INFO|XacmlPdpApplicationManager|main] Application optimization supports 
[onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0] policy-pap | partitioner.class = null mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,473] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.317+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath optimization at this path /opt/app/policy/pdpx/apps/optimization policy-pap | partitioner.ignore.keys = false mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Completed initialization of buffer pool policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) kafka | [2024-01-22 09:46:49,473] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.317+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/optimization policy-pap | receive.buffer.bytes = 32768 mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,473] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.318+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/optimization/xacml.properties policy-pap | reconnect.backoff.max.ms = 1000 mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: 128 rollback segments are active. 
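[editor note] The ConsumerConfig dump earlier (bootstrap.servers = [kafka:9092], group.id = e05b1419-7a24-40a9-8770-7ef720bc0059) belongs to the PDP's internal consumer, and the "Subscribed to topic(s): policy-pdp-pap" line confirms it listens on the policy-pdp-pap topic used for PAP/PDP messaging. When debugging a run like this, the same topic can be tailed from the kafka container; a sketch assuming a Confluent-style image where kafka-console-consumer is on the PATH (a stock Apache distribution ships it as kafka-console-consumer.sh):

    # Tail PDP<->PAP messages; Ctrl-C to stop. Assumes the broker container is named "kafka".
    docker exec -it kafka kafka-console-consumer \
      --bootstrap-server kafka:9092 \
      --topic policy-pdp-pap \
      --from-beginning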
policy-db-migrator | kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.319+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-pap | reconnect.backoff.ms = 50 mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... policy-db-migrator | kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-pap | request.timeout.ms = 30000 mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
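[editor note] The property map logged above is what XacmlPolicyUtils reads from /opt/app/policy/pdpx/apps/optimization/xacml.properties. Printed as a properties file, the same content would look roughly like this (keys and values copied from the dump; the on-disk ordering is an assumption):

    cat <<'EOF'
    xacml.rootPolicies=
    xacml.referencedPolicies=
    xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory
    xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory
    xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory
    xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory
    xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory
    xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory
    xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory
    xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory
    xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory
    xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides
    EOF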
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.319+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-pap | retries = 2147483647 mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: log sequence number 330574; transaction id 299 policy-db-migrator | -------------- policy-xacml-pdp | [2024-01-22T09:46:50.319+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) policy-pap | retry.backoff.ms = 100 policy-xacml-pdp | [2024-01-22T09:46:50.319+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory mariadb | 2024-01-22 9:46:14 0 [Note] Plugin 'FEEDBACK' is disabled. policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) policy-pap | sasl.client.callback.handler.class = null policy-xacml-pdp | [2024-01-22T09:46:50.319+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory mariadb | 2024-01-22 9:46:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. policy-db-migrator | kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) policy-pap | sasl.jaas.config = null policy-xacml-pdp | [2024-01-22T09:46:50.319+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory mariadb | 2024-01-22 9:46:14 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 
policy-db-migrator | kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | [2024-01-22T09:46:50.320+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides mariadb | 2024-01-22 9:46:14 0 [Note] Server socket created on IP: '0.0.0.0'. policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | [2024-01-22T09:46:50.320+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> mariadb | 2024-01-22 9:46:14 0 [Note] Server socket created on IP: '::'. kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) policy-pap | sasl.kerberos.service.name = null policy-xacml-pdp | [2024-01-22T09:46:50.320+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-db-migrator | -------------- mariadb | 2024-01-22 9:46:14 0 [Note] mariadbd: ready for connections. 
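[editor note] Migrator scripts 0830-0850 above (and the 08xx scripts that follow) add lookup indexes such as FK_ToscaNodeTemplate_requirementsName and FK_ToscaNodeType_requirementsName over the (name, version) reference columns created earlier. A hedged way to confirm one of them landed, reusing the mariadb container and policy_user credentials visible in the log (the policyadmin schema name is an assumption):

    # List the index if it exists; an empty result means the upgrade script did not apply.
    docker exec -it mariadb mysql -upolicy_user -ppolicy_user policyadmin \
      -e "SHOW INDEX FROM toscanodetemplate WHERE Key_name = 'FK_ToscaNodeTemplate_requirementsName';"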
kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | [2024-01-22T09:46:50.320+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution kafka | [2024-01-22 09:46:49,474] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.320+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-db-migrator | -------------- policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 mariadb | 2024-01-22 9:46:14 0 [Note] InnoDB: Buffer pool(s) load completed at 240122 9:46:14 kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.320+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-db-migrator | policy-pap | sasl.login.callback.handler.class = null mariadb | 2024-01-22 9:46:15 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.321+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-db-migrator | policy-pap | sasl.login.class = null mariadb | 2024-01-22 9:46:15 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.321+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-pap | sasl.login.connect.timeout.ms = null mariadb | 2024-01-22 9:46:15 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.322+00:00|WARN|XACMLProperties|main] Properties file /usr/lib/jvm/java-17-openjdk/lib/xacml.properties cannot be read. 
policy-db-migrator | -------------- policy-pap | sasl.login.read.timeout.ms = null mariadb | 2024-01-22 9:46:15 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.5' (This connection closed normally without authentication) kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.349+00:00|INFO|XacmlPdpApplicationManager|main] Application monitoring supports [onap.Monitoring 1.0.0] policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.349+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath monitoring at this path /opt/app/policy/pdpx/apps/monitoring policy-db-migrator | -------------- policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.349+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/monitoring policy-db-migrator | policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.349+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-db-migrator | policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | [2024-01-22T09:46:50.350+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.350+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) policy-pap | sasl.mechanism = GSSAPI kafka | [2024-01-22 09:46:49,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | [2024-01-22T09:46:50.350+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory kafka | [2024-01-22 09:46:49,476] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | policy-xacml-pdp | [2024-01-22T09:46:50.350+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory kafka | [2024-01-22 09:46:49,476] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 
policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | kafka | [2024-01-22 09:46:49,476] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.351+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql kafka | [2024-01-22 09:46:49,476] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.351+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,476] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.351+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) kafka | [2024-01-22 09:46:49,476] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.351+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,476] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.351+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | 
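For readability, the key/value pairs that the XacmlPolicyUtils lines in the surrounding log entries report for the monitoring application correspond to the file /opt/app/policy/pdpx/apps/monitoring/xacml.properties. A minimal sketch of that file follows, reconstructed only from the values logged here and assuming standard Java properties syntax (one key=value per line); the actual on-disk file may order or comment the entries differently:

xacml.rootPolicies=
xacml.referencedPolicies=
xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory
xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory
xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory
xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory
xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory
xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory
xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory
xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory
xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory
xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides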
kafka | [2024-01-22 09:46:49,477] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.352+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | kafka | [2024-01-22 09:46:49,483] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.352+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.352+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-xacml-pdp | [2024-01-22T09:46:50.352+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-pap | security.providers = null policy-db-migrator | -------------- policy-xacml-pdp | [2024-01-22T09:46:50.352+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | policy-xacml-pdp | [2024-01-22T09:46:50.356+00:00|INFO|XacmlPdpApplicationManager|main] Application guard supports 
[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0] kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | policy-xacml-pdp | [2024-01-22T09:46:50.356+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath guard at this path /opt/app/policy/pdpx/apps/guard kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-xacml-pdp | [2024-01-22T09:46:50.356+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/guard kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.cipher.suites = null policy-db-migrator | -------------- policy-xacml-pdp | [2024-01-22T09:46:50.356+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/guard/xacml.properties kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) policy-xacml-pdp | [2024-01-22T09:46:50.357+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- policy-xacml-pdp | {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, 
jakarta.persistence.jdbc.driver=org.mariadb.jdbc.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:mariadb://mariadb:3306/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, eclipselink.target-database=MySQL, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.engine.factory.class = null policy-xacml-pdp | [2024-01-22T09:46:50.357+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.persistenceunit -> OperationsHistoryPU policy-db-migrator | kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.key.password = null policy-xacml-pdp | [2024-01-22T09:46:50.357+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.name -> GetOperationOutcome policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | [2024-01-22T09:46:50.358+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null policy-xacml-pdp | [2024-01-22T09:46:50.358+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2024-01-22T09:46:50.358+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-pap | ssl.keystore.key = null policy-xacml-pdp | [2024-01-22T09:46:50.358+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-22 09:46:49,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.location = null kafka | [2024-01-22 09:46:49,488] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-22 09:46:49,488] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.358+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-pap | ssl.keystore.password = null kafka | [2024-01-22 09:46:49,488] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.type = JKS kafka | [2024-01-22 09:46:49,488] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-22 09:46:49,488] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-01-22 09:46:49,488] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | ssl.provider = null kafka | [2024-01-22 09:46:49,488] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.359+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip policy-db-migrator | policy-pap | ssl.secure.random.implementation = null kafka | [2024-01-22 09:46:49,488] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.359+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.description -> Returns operation outcome policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-01-22 09:46:49,489] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-xacml-pdp | [2024-01-22T09:46:50.359+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.description -> Returns operation counts based on time window policy-pap | ssl.truststore.certificates = null kafka | [2024-01-22 09:46:49,489] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-xacml-pdp | [2024-01-22T09:46:50.359+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.password -> policy_user kafka | [2024-01-22 09:46:49,489] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, 
policyVersion) policy-pap | ssl.truststore.location = null policy-xacml-pdp | [2024-01-22T09:46:50.359+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory kafka | [2024-01-22 09:46:49,490] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.truststore.password = null policy-xacml-pdp | [2024-01-22T09:46:50.360+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.issuer -> urn:org:onap:xacml:guard:get-operation-outcome kafka | [2024-01-22 09:46:49,490] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | ssl.truststore.type = JKS policy-xacml-pdp | [2024-01-22T09:46:50.360+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.persistenceunit -> OperationsHistoryPU kafka | [2024-01-22 09:46:49,490] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | transaction.timeout.ms = 60000 policy-xacml-pdp | [2024-01-22T09:46:50.360+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.driver -> org.mariadb.jdbc.Driver kafka | [2024-01-22 09:46:49,490] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-pap | transactional.id = null policy-xacml-pdp | [2024-01-22T09:46:50.360+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.name -> CountRecentOperations kafka | [2024-01-22 09:46:49,490] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-xacml-pdp | [2024-01-22T09:46:50.360+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory kafka | [2024-01-22 09:46:49,490] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) policy-pap | kafka | [2024-01-22 09:46:49,490] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.361+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-db-migrator | -------------- policy-pap | [2024-01-22T09:46:48.667+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
kafka | [2024-01-22 09:46:49,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.361+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.url -> jdbc:mariadb://mariadb:3306/operationshistory policy-db-migrator | policy-pap | [2024-01-22T09:46:48.670+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 kafka | [2024-01-22 09:46:49,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.361+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.user -> policy_user policy-db-migrator | policy-pap | [2024-01-22T09:46:48.670+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | [2024-01-22 09:46:49,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.361+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-pap | [2024-01-22T09:46:48.670+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916808670 kafka | [2024-01-22 09:46:49,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.361+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.issuer -> urn:org:onap:xacml:guard:count-recent-operations policy-db-migrator | -------------- policy-pap | [2024-01-22T09:46:48.670+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4e2e724c-4ea4-4504-84c4-91ab75b7b55c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-01-22 09:46:49,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.361+00:00|INFO|XacmlPolicyUtils|main] xacml.pip.engines -> count-recent-operations,get-operation-outcome policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) policy-pap | [2024-01-22T09:46:48.670+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator kafka | [2024-01-22 09:46:49,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.361+00:00|INFO|XacmlPolicyUtils|main] eclipselink.target-database -> MySQL policy-db-migrator | -------------- policy-pap | [2024-01-22T09:46:48.671+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher kafka | [2024-01-22 09:46:49,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.361+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-db-migrator | 
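The guard application's xacml.properties, loaded from /opt/app/policy/pdpx/apps/guard/xacml.properties in the surrounding log entries, differs from the other applications: it switches the root-policy combining algorithm to deny-overrides and configures two PIP engines (count-recent-operations and get-operation-outcome) backed by the operationshistory MariaDB database. A sketch of those guard-specific entries, again reconstructed only from the key/value pairs logged here and assuming standard Java properties syntax:

xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides
xacml.pip.engines=count-recent-operations,get-operation-outcome
count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip
count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations
count-recent-operations.name=CountRecentOperations
count-recent-operations.description=Returns operation counts based on time window
count-recent-operations.persistenceunit=OperationsHistoryPU
get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip
get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome
get-operation-outcome.name=GetOperationOutcome
get-operation-outcome.description=Returns operation outcome
get-operation-outcome.persistenceunit=OperationsHistoryPU
jakarta.persistence.jdbc.driver=org.mariadb.jdbc.Driver
jakarta.persistence.jdbc.url=jdbc:mariadb://mariadb:3306/operationshistory
jakarta.persistence.jdbc.user=policy_user
jakarta.persistence.jdbc.password=policy_user
eclipselink.target-database=MySQL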
policy-pap | [2024-01-22T09:46:48.672+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher kafka | [2024-01-22 09:46:49,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.362+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip policy-db-migrator | policy-pap | [2024-01-22T09:46:48.673+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers kafka | [2024-01-22 09:46:49,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.362+00:00|INFO|StdXacmlApplicationServiceProvider|main] {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.mariadb.jdbc.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:mariadb://mariadb:3306/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, eclipselink.target-database=MySQL, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-pap | [2024-01-22T09:46:48.676+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers kafka | [2024-01-22 09:46:49,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.363+00:00|INFO|XacmlPdpApplicationManager|main] Application native supports [onap.policies.native.Xacml 1.0.0] policy-db-migrator | -------------- policy-pap | [2024-01-22T09:46:48.676+00:00|INFO|ServiceManager|main] Policy PAP starting PDP 
modification lock policy-xacml-pdp | [2024-01-22T09:46:50.363+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath native at this path /opt/app/policy/pdpx/apps/native kafka | [2024-01-22 09:46:49,494] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | [2024-01-22T09:46:48.677+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests kafka | [2024-01-22 09:46:49,494] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T09:46:48.677+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-xacml-pdp | [2024-01-22T09:46:50.363+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/native policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,494] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T09:46:48.677+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-xacml-pdp | [2024-01-22T09:46:50.363+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/native/xacml.properties policy-db-migrator | kafka | [2024-01-22 09:46:49,495] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.363+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties kafka | [2024-01-22 09:46:49,495] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T09:46:48.678+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-db-migrator | policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} kafka | [2024-01-22 09:46:49,495] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T09:46:48.681+00:00|INFO|ServiceManager|main] Policy PAP started policy-db-migrator | > upgrade 
0970-FK_ToscaNodeTemplate_requirementsName.sql policy-xacml-pdp | [2024-01-22T09:46:50.364+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> kafka | [2024-01-22 09:46:49,495] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T09:46:48.682+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.294 seconds (process running for 11.998) policy-db-migrator | -------------- policy-xacml-pdp | [2024-01-22T09:46:50.364+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory kafka | [2024-01-22 09:46:49,495] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | [2024-01-22T09:46:49.158+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-xacml-pdp | [2024-01-22T09:46:50.364+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory kafka | [2024-01-22 09:46:49,499] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) policy-pap | [2024-01-22T09:46:49.159+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: BPdQ-V_zRw-DYkZRIBhaQQ policy-db-migrator | -------------- policy-xacml-pdp | [2024-01-22T09:46:50.364+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:49.160+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: BPdQ-V_zRw-DYkZRIBhaQQ policy-db-migrator | policy-xacml-pdp | [2024-01-22T09:46:50.364+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:49.160+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: BPdQ-V_zRw-DYkZRIBhaQQ policy-db-migrator | policy-xacml-pdp | [2024-01-22T09:46:50.364+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> 
urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:49.247+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-xacml-pdp | [2024-01-22T09:46:50.364+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:49.256+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 policy-db-migrator | -------------- policy-xacml-pdp | [2024-01-22T09:46:50.365+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:49.256+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-xacml-pdp | [2024-01-22T09:46:50.365+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:49.260+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- policy-xacml-pdp | [2024-01-22T09:46:50.365+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory kafka | [2024-01-22 09:46:49,501] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-xacml-pdp | [2024-01-22T09:46:50.365+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-pap | [2024-01-22T09:46:49.260+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Cluster ID: BPdQ-V_zRw-DYkZRIBhaQQ kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-xacml-pdp | [2024-01-22T09:46:50.365+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-pap | [2024-01-22T09:46:49.373+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 5 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-xacml-pdp | [2024-01-22T09:46:50.365+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-pap | [2024-01-22T09:46:49.406+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-xacml-pdp | [2024-01-22T09:46:50.366+00:00|INFO|XacmlPdpApplicationManager|main] Application match supports [onap.policies.Match 1.0.0] policy-pap | [2024-01-22T09:46:49.489+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | [2024-01-22T09:46:49.512+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.366+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath match at this path /opt/app/policy/pdpx/apps/match policy-db-migrator | -------------- policy-pap | [2024-01-22T09:46:49.595+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.366+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/match policy-db-migrator | policy-pap | [2024-01-22T09:46:49.620+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.367+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/match/xacml.properties policy-db-migrator | policy-pap | [2024-01-22T09:46:49.701+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.367+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-pap | [2024-01-22T09:46:49.726+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-db-migrator | -------------- policy-pap | [2024-01-22T09:46:49.809+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.367+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | [2024-01-22T09:46:49.834+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.367+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-22T09:46:49.914+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.367+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | [2024-01-22T09:46:49.954+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.367+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | [2024-01-22T09:46:50.019+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.367+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory kafka | [2024-01-22 09:46:49,501] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | [2024-01-22T09:46:50.058+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.368+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.125+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 19 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.368+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.163+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.368+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.232+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 21 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.368+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.267+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.368+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-db-migrator | kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.338+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 23 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.368+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-db-migrator | kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.374+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.368+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.444+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 25 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.368+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, 
xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.478+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2024-01-22T09:46:50.369+00:00|INFO|XacmlPdpApplicationManager|main] Application naming supports [onap.policies.Naming 1.0.0] policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.555+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-xacml-pdp | [2024-01-22T09:46:50.369+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath naming at this path /opt/app/policy/pdpx/apps/naming policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.560+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-xacml-pdp | [2024-01-22T09:46:50.369+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/naming policy-db-migrator | kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.585+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-xacml-pdp | [2024-01-22T09:46:50.370+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/naming/xacml.properties policy-db-migrator | kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.370+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-pap | [2024-01-22T09:46:50.589+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] (Re-)joining group kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-db-migrator | -------------- policy-pap | [2024-01-22T09:46:50.592+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-79317c40-45e7-478a-a027-76c9ee36ccca kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.370+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-db-migrator | ALTER TABLE 
toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | [2024-01-22T09:46:50.592+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.370+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-db-migrator | -------------- policy-pap | [2024-01-22T09:46:50.592+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-xacml-pdp | [2024-01-22T09:46:50.370+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-db-migrator | policy-pap | [2024-01-22T09:46:50.598+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Request joining group due to: need to re-join with the given member-id: consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3-266a05a0-deeb-4c96-b046-685b0c02ecc9 policy-xacml-pdp | [2024-01-22T09:46:50.370+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-db-migrator | kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.598+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-xacml-pdp | [2024-01-22T09:46:50.370+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:50.598+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] (Re-)joining group policy-xacml-pdp | [2024-01-22T09:46:50.371+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.621+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Successfully joined group with generation Generation{generationId=1, memberId='consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3-266a05a0-deeb-4c96-b046-685b0c02ecc9', protocol='range'} policy-xacml-pdp | [2024-01-22T09:46:50.371+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.626+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-79317c40-45e7-478a-a027-76c9ee36ccca', protocol='range'} policy-xacml-pdp | [2024-01-22T09:46:50.371+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,502] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) policy-pap | [2024-01-22T09:46:53.633+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-79317c40-45e7-478a-a027-76c9ee36ccca=Assignment(partitions=[policy-pdp-pap-0])} policy-xacml-pdp | [2024-01-22T09:46:50.371+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-db-migrator | kafka | [2024-01-22 09:46:49,503] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.633+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Finished assignment for group at generation 1: {consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3-266a05a0-deeb-4c96-b046-685b0c02ecc9=Assignment(partitions=[policy-pdp-pap-0])} policy-xacml-pdp | [2024-01-22T09:46:50.371+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-db-migrator | kafka | [2024-01-22 09:46:49,503] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.694+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Successfully synced group in generation Generation{generationId=1, memberId='consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3-266a05a0-deeb-4c96-b046-685b0c02ecc9', protocol='range'} policy-xacml-pdp | [2024-01-22T09:46:50.371+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql kafka | [2024-01-22 09:46:49,503] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.694+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-79317c40-45e7-478a-a027-76c9ee36ccca', protocol='range'} policy-xacml-pdp | [2024-01-22T09:46:50.371+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,503] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.695+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-22 09:46:49,503] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.695+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-xacml-pdp | [2024-01-22T09:46:50.371+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,503] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.701+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-xacml-pdp | [2024-01-22T09:46:50.372+00:00|INFO|XacmlPdpApplicationManager|main] Finished applications initialization {optimize=org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplication@2371aaca, native=org.onap.policy.xacml.pdp.application.nativ.NativePdpApplication@5b529706, guard=org.onap.policy.xacml.pdp.application.guard.GuardPdpApplication@63fdab07, naming=org.onap.policy.xacml.pdp.application.naming.NamingPdpApplication@7b5a12ae, 
match=org.onap.policy.xacml.pdp.application.match.MatchPdpApplication@5553d0f5, configure=org.onap.policy.xacml.pdp.application.monitoring.MonitoringPdpApplication@1af687fe} policy-db-migrator | kafka | [2024-01-22 09:46:49,503] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.700+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Adding newly assigned partitions: policy-pdp-pap-0 policy-xacml-pdp | Exception in thread "main" org.onap.policy.pdpx.main.PolicyXacmlPdpRuntimeException: Start of policy-xacml-pdp service failed. policy-db-migrator | kafka | [2024-01-22 09:46:49,503] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.717+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-01-22T09:46:53.717+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Found no committed offset for partition policy-pdp-pap-0 policy-xacml-pdp | at org.onap.policy.pdpx.main.startstop.Main.main(Main.java:113) kafka | [2024-01-22 09:46:49,503] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T09:46:53.737+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
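Note: the policy-xacml-pdp startup failure is scattered through the interleaved kafka, policy-pap and policy-db-migrator output immediately above and below; collected into one trace it reads as follows (constructor frames are shown here as <init>, which the console rendering appears to strip):

Exception in thread "main" org.onap.policy.pdpx.main.PolicyXacmlPdpRuntimeException: Start of policy-xacml-pdp service failed.
    at org.onap.policy.pdpx.main.startstop.Main.main(Main.java:113)
Caused by: org.onap.policy.pdpx.main.PolicyXacmlPdpRuntimeException: no sinks for topic: POLICY-PDP-PAP
    at org.onap.policy.pdpx.main.startstop.XacmlPdpActivator.<init>(XacmlPdpActivator.java:138)
    at org.onap.policy.pdpx.main.startstop.Main.<init>(Main.java:76)
    at org.onap.policy.pdpx.main.startstop.Main.main(Main.java:110)
Caused by: org.onap.policy.common.endpoints.event.comm.client.BidirectionalTopicClientException: no sinks for topic: POLICY-PDP-PAP
    at org.onap.policy.common.endpoints.event.comm.client.BidirectionalTopicClient.<init>(BidirectionalTopicClient.java:81)
    at org.onap.policy.pdpx.main.startstop.XacmlPdpActivator.<init>(XacmlPdpActivator.java:117)
    ... 2 more

The BidirectionalTopicClientException is thrown while the activator wires up its PAP communication, so no sink was available for the POLICY-PDP-PAP topic at the time the client was constructed; this most likely points at the xacml-pdp topic/sink configuration used in this CSIT run.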
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-xacml-pdp | Caused by: org.onap.policy.pdpx.main.PolicyXacmlPdpRuntimeException: no sinks for topic: POLICY-PDP-PAP kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-pap | [2024-01-22T09:46:53.737+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3, groupId=531f91b3-d52b-4d23-8b1f-214c1df749e6] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-db-migrator | -------------- policy-xacml-pdp | at org.onap.policy.pdpx.main.startstop.XacmlPdpActivator.<init>(XacmlPdpActivator.java:138) kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-pap | [2024-01-22T09:48:48.678+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-xacml-pdp | at org.onap.policy.pdpx.main.startstop.Main.<init>(Main.java:76) kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-pap | [2024-01-22T09:50:48.744+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-db-migrator | -------------- policy-xacml-pdp | at org.onap.policy.pdpx.main.startstop.Main.main(Main.java:110) kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-db-migrator | policy-xacml-pdp | Caused by: org.onap.policy.common.endpoints.event.comm.client.BidirectionalTopicClientException: no sinks for topic: POLICY-PDP-PAP kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | policy-xacml-pdp | at org.onap.policy.common.endpoints.event.comm.client.BidirectionalTopicClient.<init>(BidirectionalTopicClient.java:81) kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-db-migrator | > upgrade 0100-pdp.sql policy-xacml-pdp | at org.onap.policy.pdpx.main.startstop.XacmlPdpActivator.<init>(XacmlPdpActivator.java:117) kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the
become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | -------------- policy-xacml-pdp | ... 2 more kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,533] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-2 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) policy-db-migrator | > upgrade 0130-pdpstatistics.sql kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 
1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) policy-db-migrator | > upgrade 0150-pdpstatistics.sql kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) policy-db-migrator | > upgrade 
0160-jpapdpstatistics_enginestats.sql kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,534] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | UPDATE jpapdpstatistics_enginestats a kafka | [2024-01-22 09:46:49,535] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) policy-db-migrator | JOIN pdpstatistics b kafka | [2024-01-22 09:46:49,536] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp kafka | [2024-01-22 09:46:49,580] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer 
state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | SET a.id = b.id kafka | [2024-01-22 09:46:49,591] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,594] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,596] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,599] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql kafka | [2024-01-22 09:46:49,625] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,626] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp kafka | [2024-01-22 09:46:49,627] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,627] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,627] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,636] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql kafka | [2024-01-22 09:46:49,636] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,637] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) kafka | [2024-01-22 09:46:49,637] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,637] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,645] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | kafka | [2024-01-22 09:46:49,645] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql kafka | [2024-01-22 09:46:49,645] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,646] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) kafka | [2024-01-22 09:46:49,646] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,652] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | kafka | [2024-01-22 09:46:49,653] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,653] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0210-sequence.sql kafka | [2024-01-22 09:46:49,653] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,653] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-01-22 09:46:49,661] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-db-migrator | kafka | [2024-01-22 09:46:49,662] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,662] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0220-sequence.sql kafka | [2024-01-22 09:46:49,662] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,662] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-01-22 09:46:49,669] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,670] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,670] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,670] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql kafka | [2024-01-22 09:46:49,670] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,677] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) kafka | [2024-01-22 09:46:49,678] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,678] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,678] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,678] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql kafka | [2024-01-22 09:46:49,686] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,686] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) kafka | [2024-01-22 09:46:49,686] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,687] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,687] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,693] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0120-toscatrigger.sql kafka | [2024-01-22 09:46:49,694] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,694] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) policy-db-migrator | DROP TABLE IF EXISTS toscatrigger kafka | [2024-01-22 09:46:49,694] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,695] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,703] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | kafka | [2024-01-22 09:46:49,704] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql kafka | [2024-01-22 09:46:49,704] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,705] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB kafka | [2024-01-22 09:46:49,706] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,712] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | kafka | [2024-01-22 09:46:49,712] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,712] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0140-toscaparameter.sql kafka | [2024-01-22 09:46:49,712] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,712] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | DROP TABLE IF EXISTS toscaparameter kafka | [2024-01-22 09:46:49,720] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,721] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,721] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,721] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0150-toscaproperty.sql kafka | [2024-01-22 09:46:49,721] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,732] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints kafka | [2024-01-22 09:46:49,732] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,732] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,732] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,733] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata kafka | [2024-01-22 09:46:49,739] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,739] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-22 09:46:49,739] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,739] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,739] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | DROP TABLE IF EXISTS toscaproperty kafka | [2024-01-22 09:46:49,750] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,750] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,750] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,750] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql kafka | [2024-01-22 09:46:49,750] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,756] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY kafka | [2024-01-22 09:46:49,757] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,757] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,757] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,757] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) kafka | [2024-01-22 09:46:49,764] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,764] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,764] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,764] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql kafka | [2024-01-22 09:46:49,764] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,771] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY kafka | [2024-01-22 09:46:49,771] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,771] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,771] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,771] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) kafka | [2024-01-22 09:46:49,778] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,778] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql kafka | [2024-01-22 09:46:49,778] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,778] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT kafka | [2024-01-22 09:46:49,778] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,785] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | kafka | [2024-01-22 09:46:49,786] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,786] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0100-upgrade.sql kafka | [2024-01-22 09:46:49,786] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,786] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | select 'upgrade to 1100 completed' as msg kafka | [2024-01-22 09:46:49,794] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,795] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,796] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) policy-db-migrator | msg kafka | [2024-01-22 09:46:49,796] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | upgrade to 1100 completed kafka | [2024-01-22 09:46:49,796] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,805] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql kafka | [2024-01-22 09:46:49,810] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,810] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME kafka | [2024-01-22 09:46:49,811] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,811] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,818] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | kafka | [2024-01-22 09:46:49,818] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-01-22 09:46:49,819] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,819] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics kafka | [2024-01-22 09:46:49,819] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,825] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | kafka | [2024-01-22 09:46:49,826] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,826] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-01-22 09:46:49,826] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,827] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | kafka | [2024-01-22 09:46:49,833] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | kafka | [2024-01-22 09:46:49,834] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0120-audit_sequence.sql kafka | [2024-01-22 09:46:49,834] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,834] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-01-22 09:46:49,835] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,841] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | kafka | [2024-01-22 09:46:49,844] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,844] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,845] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,845] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | > upgrade 0130-statistics_sequence.sql kafka | [2024-01-22 09:46:49,853] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,854] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-01-22 09:46:49,854] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,854] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,855] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,862] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-01-22 09:46:49,863] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,863] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,863] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,863] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | TRUNCATE TABLE sequence kafka | [2024-01-22 09:46:49,871] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,872] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,872] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,872] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0100-pdpstatistics.sql kafka | [2024-01-22 09:46:49,872] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,880] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics kafka | [2024-01-22 09:46:49,881] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,881] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,881] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,881] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | DROP TABLE pdpstatistics kafka | [2024-01-22 09:46:49,887] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,888] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-22 09:46:49,888] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,888] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql kafka | [2024-01-22 09:46:49,888] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,897] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats kafka | [2024-01-22 09:46:49,898] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,898] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,898] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,898] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | > upgrade 0120-statistics_sequence.sql kafka | [2024-01-22 09:46:49,905] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,905] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | DROP TABLE statistics_sequence policy-db-migrator | -------------- kafka | [2024-01-22 09:46:49,906] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-22 09:46:49,906] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policyadmin: OK: upgrade (1300) kafka | [2024-01-22 09:46:49,906] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | name version kafka | [2024-01-22 09:46:49,913] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policyadmin 1300 kafka | [2024-01-22 09:46:49,915] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) policy-db-migrator | ID script operation from_version to_version tag success atTime kafka | [2024-01-22 09:46:49,915] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:49,915] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:49,915] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(OaftdXmYRROIVmYTHrFpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:49,965] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:49,966] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:49,966] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:49,966] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:49,966] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:50,046] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:50,047] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:50,047] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:15 kafka | [2024-01-22 09:46:50,047] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,047] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,184] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,184] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,184] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,185] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,185] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,262] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,264] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,264] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,264] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,264] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,271] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,272] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,272] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,272] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,272] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,332] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,333] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,333] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,333] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,333] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,371] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,372] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:16 kafka | [2024-01-22 09:46:50,372] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,372] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,373] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,397] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,397] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,397] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-01-22 09:46:50,397] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,398] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,407] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,408] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,409] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,409] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,409] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,417] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,417] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,417] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,417] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,417] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,423] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,423] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,423] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,423] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,424] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,429] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,429] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,429] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:17 kafka | [2024-01-22 09:46:50,429] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,429] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,435] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,435] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,436] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,436] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,436] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,443] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,444] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,444] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,444] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,444] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,455] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,456] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,456] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,456] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,456] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,465] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,466] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,466] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,466] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,466] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(jcMNsJOTTRCJff5JbP9Ksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:18 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | 88 
0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,472] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2201240946150800u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,473] TRACE 
[Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:19 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:20 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2201240946150900u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-37 (state.change.logger) policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2201240946151000u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2201240946151000u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2201240946151000u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2201240946151000u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2201240946151000u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2201240946151000u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2201240946151000u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2201240946151000u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2201240946151000u 1 2024-01-22 09:46:20 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2201240946151100u 1 2024-01-22 09:46:21 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 
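The broker state reported above (leader election for the 50 __consumer_offsets partitions plus policy-pdp-pap-0, and the migrator's schema upgrades) can be spot-checked from inside the running stack. The commands below are a minimal sketch and not part of the CSIT scripts: they assume the broker container is reachable as "kafka" on port 9092 and that the image ships the standard Kafka CLI tools (kafka-topics / kafka-consumer-groups, or the *.sh variants in a plain Apache Kafka distribution).
# List topics; the log above shows __consumer_offsets and policy-pdp-pap being created on broker 1
docker exec kafka kafka-topics --bootstrap-server kafka:9092 --list
# Describe the internal offsets topic; leader/ISR should match the "Leader __consumer_offsets-N ... ISR [1]" entries above
docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe --topic __consumer_offsets
# Once clients subscribe, list the consumer groups handled by the coordinator elected in the entries that follow
docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --list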
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2201240946151200u 1 2024-01-22 09:46:21 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2201240946151200u 1 2024-01-22 09:46:21 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2201240946151200u 1 2024-01-22 09:46:21 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2201240946151200u 1 2024-01-22 09:46:21 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2201240946151300u 1 2024-01-22 09:46:21 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2201240946151300u 1 2024-01-22 09:46:21 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2201240946151300u 1 2024-01-22 09:46:21 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | policyadmin: OK @ 1300 kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-01-22 09:46:50,473] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-01-22 09:46:50,488] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,490] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading 
of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,492] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,492] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for 
partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,493] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 09:46:50,493] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,495] INFO [Broker id=1] Finished LeaderAndIsr request in 997ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-01-22 09:46:50,498] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,500] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,500] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,500] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,500] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,500] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,500] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=jcMNsJOTTRCJff5JbP9Ksg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=OaftdXmYRROIVmYTHrFpHg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-01-22 09:46:50,500] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,500] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,501] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,501] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,501] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,501] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,501] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,501] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,501] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,501] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,502] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,502] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,502] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,502] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,502] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,502] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,503] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,503] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,503] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,503] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,503] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,503] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,503] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,503] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,504] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,504] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,504] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,504] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,504] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,504] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,504] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,504] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,504] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,505] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,505] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,505] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,505] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,505] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,505] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,505] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 09:46:50,505] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 
09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 
2 (state.change.logger)
kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-22 09:46:50,506] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-22 09:46:50,507] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-22 09:46:50,507] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-22 09:46:50,505] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 09:46:50,507] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 09:46:50,508] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 09:46:50,508] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 09:46:50,582] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-79317c40-45e7-478a-a027-76c9ee36ccca and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 09:46:50,593] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 531f91b3-d52b-4d23-8b1f-214c1df749e6 in Empty state. Created a new member id consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3-266a05a0-deeb-4c96-b046-685b0c02ecc9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
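The GroupCoordinator entries above record the broker side of a dynamic consumer joining a group: the client arrives without a member id, the coordinator mints one (for example consumer-policy-pap-4-79317c40-45e7-478a-a027-76c9ee36ccca) and asks the client to rejoin with it; the client reports that round trip internally as MemberIdRequiredException, which is why the rebalance reasons below mention it. A minimal sketch of driving the same join against this broker from a shell; the topic name is a placeholder, and the wrapper may be named kafka-console-consumer (without .sh) depending on the Kafka image:

# Sketch only, not part of the CSIT scripts: join group policy-pap as a dynamic
# member and watch the coordinator log the member-id handshake shown above.
# example-topic is a placeholder; use a topic that actually exists on the broker.
kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 \
  --group policy-pap \
  --topic example-topic \
  --timeout-ms 10000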
kafka | [2024-01-22 09:46:50,604] INFO [GroupCoordinator 1]: Preparing to rebalance group 531f91b3-d52b-4d23-8b1f-214c1df749e6 in state PreparingRebalance with old generation 0 (__consumer_offsets-13) (reason: Adding new member consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3-266a05a0-deeb-4c96-b046-685b0c02ecc9 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 09:46:50,606] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-79317c40-45e7-478a-a027-76c9ee36ccca with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 09:46:53,617] INFO [GroupCoordinator 1]: Stabilized group 531f91b3-d52b-4d23-8b1f-214c1df749e6 generation 1 (__consumer_offsets-13) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 09:46:53,625] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 09:46:53,644] INFO [GroupCoordinator 1]: Assignment received from leader consumer-531f91b3-d52b-4d23-8b1f-214c1df749e6-3-266a05a0-deeb-4c96-b046-685b0c02ecc9 for group 531f91b3-d52b-4d23-8b1f-214c1df749e6 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 09:46:53,644] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-79317c40-45e7-478a-a027-76c9ee36ccca for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 09:51:18,817] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2024-01-22 09:51:18,817] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2024-01-22 09:51:18,822] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController)
kafka | [2024-01-22 09:51:18,824] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-pap ...
Stopping policy-api ...
Stopping kafka ...
Stopping compose_zookeeper_1 ...
Stopping mariadb ...
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-xacml-pdp ...
Removing policy-pap ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing kafka ...
Removing compose_zookeeper_1 ...
Removing mariadb ...
Removing policy-api ... done
Removing kafka ... done
Removing policy-pap ... done
Removing policy-xacml-pdp ... done
Removing compose_zookeeper_1 ... done
Removing policy-db-migrator ... done
Removing mariadb ... done
Removing network compose_default
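That completes the container teardown: docker-compose down with -v removes the named and anonymous volumes, --remove-orphans removes containers left over from other compose files, and the compose_default network is deleted last. A hedged sketch for confirming that nothing from the project survived, assuming the compose project name is compose (implied by the compose_default and compose_zookeeper_1 names):

# Sketch only: verify the cleanup done by 'docker-compose down -v --remove-orphans'.
# The name filters match the containers listed in the teardown output above.
docker ps -a --filter name=policy- --filter name=kafka --filter name=mariadb --filter name=zookeeper
docker volume ls --filter label=com.docker.compose.project=compose
docker network ls --filter name=compose_default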
++ cd /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.EfRD3o2gBU ]]
+ rsync -av /tmp/tmp.EfRD3o2gBU/ /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/archives/xacml-pdp
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 599,415 bytes received 95 bytes 1,199,020.00 bytes/sec
total size is 598,953 speedup is 1.00
+ rm -rf /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/models
+ exit 3
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2106 killed;
[ssh-agent] Stopped.
Robot results publisher started...
-Parsing output xml: Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins14898747232269626012.sh
---> sysstat.sh
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins1501981269061517488.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']'
+ mkdir -p /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/archives/
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins11909306260234811961.sh
---> capture-instance-metadata.sh
Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yvbo from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-yvbo/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins619929757192619508.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/config7035991980116717670tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins7403229673282187856.sh ---> create-netrc.sh [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins10543793763859319178.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yvbo from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-yvbo/bin to PATH [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins1524317180254655689.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins12088269343255416454.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yvbo from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible. lf-activate-venv(): INFO: Adding /tmp/venv-yvbo/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash -l /tmp/jenkins14709234323331775747.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yvbo from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible. lf-activate-venv(): INFO: Adding /tmp/venv-yvbo/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/504 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
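The two pip resolver errors above are two sides of the same pin: job-cost.sh installs python-openstackclient, which pulls openstacksdk 2.1.0 past the <1.5.0 bound declared by lftools, and logs-deploy.sh then reinstalls lftools, which drags openstacksdk back to 1.4.0, below the >=2.0.0 bound declared by python-openstackclient 6.4.0. A short sketch for surfacing the same conflicts directly, assuming the shared venv at /tmp/venv-yvbo still exists on the build node:

# Sketch only: list the conflicting requirements behind the ERROR lines above.
source /tmp/venv-yvbo/bin/activate
pip check   # prints lines such as "lftools 0.37.8 has requirement openstacksdk<1.5.0, but you have openstacksdk 2.1.0."
pip show lftools python-openstackclient openstacksdk | grep -E '^(Name|Version):'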
INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-14134 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2800.000 BogoMIPS: 5600.00 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 13G 143G 9% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 889 26229 0 5048 30821 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:50:68:53 brd ff:ff:ff:ff:ff:ff inet 10.30.107.20/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 85730sec preferred_lft 85730sec inet6 fe80::f816:3eff:fe50:6853/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:4e:d4:d1:d4 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14134) 01/22/24 _x86_64_ (8 CPU) 09:42:41 LINUX RESTART (8 CPU) 09:43:01 tps rtps wtps bread/s bwrtn/s 09:44:01 146.34 71.87 74.47 4633.49 56299.02 09:45:01 103.32 13.80 89.52 1117.55 26483.72 09:46:01 202.33 9.33 193.00 1658.91 95630.12 09:47:01 222.23 11.77 210.47 791.30 38262.60 09:48:01 12.48 0.15 12.33 27.59 15246.52 09:49:01 11.66 0.03 11.63 6.13 14976.60 09:50:01 12.42 0.05 12.37 7.73 15252.40 09:51:01 12.08 0.02 12.06 4.27 15245.05 09:52:01 12.58 0.52 12.06 20.53 14728.03 09:53:01 42.14 0.00 42.14 0.00 14395.63 Average: 77.76 10.75 67.00 826.75 30652.67 09:43:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 09:44:01 30037848 31645784 2901372 8.81 65680 1852736 1511188 4.45 957968 1650988 198264 09:45:01 29838256 31699720 3100964 9.41 83956 2074040 1462084 4.30 878660 1899780 149616 09:46:01 26952544 31644952 5986676 18.17 128584 4749224 1432316 4.21 1031912 4486632 379508 09:47:01 25198404 
30063008 7740816 23.50 137548 4897236 6615180 19.46 2784552 4433172 244 09:48:01 25198612 30064236 7740608 23.50 137724 4898068 6579700 19.36 2784844 4432964 124 09:49:01 25186760 30053012 7752460 23.54 137868 4898516 6569712 19.33 2796796 4432960 396 09:50:01 25182724 30049716 7756496 23.55 138048 4899016 6556904 19.29 2800432 4433276 128 09:51:01 25180456 30047740 7758764 23.55 138208 4899140 6568692 19.33 2802220 4433184 248 09:52:01 25144516 30013004 7794704 23.66 138440 4899840 6626608 19.50 2837152 4433128 576 09:53:01 26254156 31144080 6685064 20.30 139304 4926008 2333568 6.87 1759536 4430712 256 Average: 26417428 30642525 6521792 19.80 124536 4299382 4625595 13.61 2143407 3906680 72936 09:43:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 09:44:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:44:01 ens3 325.43 243.18 1473.71 71.95 0.00 0.00 0.00 0.00 09:44:01 lo 1.33 1.33 0.14 0.14 0.00 0.00 0.00 0.00 09:45:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:45:01 ens3 58.82 43.56 800.31 7.97 0.00 0.00 0.00 0.00 09:45:01 lo 1.53 1.53 0.16 0.16 0.00 0.00 0.00 0.00 09:46:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:46:01 br-93520c8679be 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:46:01 ens3 799.67 448.68 21394.87 34.57 0.00 0.00 0.00 0.00 09:46:01 lo 9.60 9.60 0.94 0.94 0.00 0.00 0.00 0.00 09:47:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:47:01 veth5568bfc 26.97 25.17 11.56 16.28 0.00 0.00 0.00 0.00 09:47:01 veth9dda394 55.22 66.57 19.32 16.31 0.00 0.00 0.00 0.00 09:47:01 veth5516c3d 5.63 6.93 0.87 0.98 0.00 0.00 0.00 0.00 09:48:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:48:01 veth5568bfc 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 09:48:01 veth9dda394 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 09:48:01 veth5516c3d 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00 09:49:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:49:01 veth5568bfc 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 09:49:01 veth9dda394 0.22 0.28 0.11 0.05 0.00 0.00 0.00 0.00 09:49:01 veth5516c3d 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00 09:50:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:50:01 veth5568bfc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:50:01 veth9dda394 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:50:01 veth5516c3d 0.17 0.33 0.01 0.02 0.00 0.00 0.00 0.00 09:51:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:51:01 veth5568bfc 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 09:51:01 veth9dda394 0.22 0.27 0.11 0.05 0.00 0.00 0.00 0.00 09:51:01 veth5516c3d 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00 09:52:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:52:01 veth5568bfc 19.45 16.75 4.96 15.65 0.00 0.00 0.00 0.00 09:52:01 veth9dda394 16.48 19.25 15.17 4.76 0.00 0.00 0.00 0.00 09:52:01 veth5516c3d 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00 09:53:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:53:01 veth5568bfc 0.32 0.27 0.02 0.02 0.00 0.00 0.00 0.00 09:53:01 br-93520c8679be 0.63 0.63 0.20 0.50 0.00 0.00 0.00 0.00 09:53:01 ens3 1277.54 789.97 23759.38 139.85 0.00 0.00 0.00 0.00 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Average: veth5568bfc 4.67 4.22 1.65 3.19 0.00 0.00 0.00 0.00 Average: br-93520c8679be 0.06 0.06 0.02 0.05 0.00 0.00 0.00 0.00 Average: ens3 125.09 78.20 2369.13 13.89 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14134) 01/22/24 _x86_64_ (8 CPU) 09:42:41 LINUX RESTART (8 CPU) 09:43:01 CPU %user %nice %system %iowait %steal %idle 09:44:01 all 9.51 0.00 1.18 5.49 0.05 83.78 09:44:01 0 2.61 
0.00 0.92 27.68 0.03 68.75 09:44:01 1 3.13 0.00 0.71 10.33 0.05 85.78 09:44:01 2 3.04 0.00 0.87 0.38 0.02 95.69 09:44:01 3 4.46 0.00 1.55 0.23 0.07 93.68 09:44:01 4 3.94 0.00 0.82 0.48 0.03 94.72 09:44:01 5 37.26 0.00 2.30 2.82 0.08 57.54 09:44:01 6 11.06 0.00 1.20 0.12 0.03 87.59 09:44:01 7 10.75 0.00 1.07 1.84 0.03 86.31 09:45:01 all 9.82 0.00 0.67 3.15 0.03 86.33 09:45:01 0 6.44 0.00 0.63 10.88 0.03 82.02 09:45:01 1 11.31 0.00 0.64 0.17 0.02 87.87 09:45:01 2 2.52 0.00 0.32 1.10 0.02 96.05 09:45:01 3 14.49 0.00 0.92 8.03 0.08 76.48 09:45:01 4 4.70 0.00 0.52 0.13 0.03 94.61 09:45:01 5 3.05 0.00 0.30 0.08 0.02 96.55 09:45:01 6 14.59 0.00 0.84 2.23 0.02 82.33 09:45:01 7 21.45 0.00 1.20 2.56 0.05 74.75 09:46:01 all 10.86 0.00 3.94 5.56 0.08 79.56 09:46:01 0 8.52 0.00 3.65 12.14 0.07 75.61 09:46:01 1 12.15 0.00 2.93 0.51 0.08 84.33 09:46:01 2 10.72 0.00 3.00 0.56 0.07 85.65 09:46:01 3 9.85 0.00 5.02 0.49 0.07 84.58 09:46:01 4 11.43 0.00 3.65 0.20 0.08 84.63 09:46:01 5 11.16 0.00 3.77 0.14 0.07 84.87 09:46:01 6 11.80 0.00 4.11 1.00 0.08 83.01 09:46:01 7 11.26 0.00 5.42 29.56 0.08 53.68 09:47:01 all 23.59 0.00 3.39 3.57 0.09 69.36 09:47:01 0 21.25 0.00 2.79 1.90 0.07 74.00 09:47:01 1 26.63 0.00 3.68 0.55 0.10 69.03 09:47:01 2 25.40 0.00 3.50 1.82 0.08 69.20 09:47:01 3 18.18 0.00 2.68 2.29 0.07 76.78 09:47:01 4 17.62 0.00 2.64 4.38 0.08 75.28 09:47:01 5 29.63 0.00 4.48 3.15 0.10 62.64 09:47:01 6 31.09 0.00 4.24 0.52 0.10 64.05 09:47:01 7 19.01 0.00 3.06 13.95 0.10 63.88 09:48:01 all 0.70 0.00 0.13 1.30 0.04 97.83 09:48:01 0 0.65 0.00 0.13 0.00 0.03 99.18 09:48:01 1 0.75 0.00 0.20 0.05 0.03 98.96 09:48:01 2 0.60 0.00 0.12 0.07 0.03 99.18 09:48:01 3 0.77 0.00 0.13 0.00 0.07 99.03 09:48:01 4 0.67 0.00 0.18 0.05 0.05 99.05 09:48:01 5 0.48 0.00 0.07 10.02 0.03 89.40 09:48:01 6 0.30 0.00 0.10 0.18 0.00 99.42 09:48:01 7 1.37 0.00 0.12 0.00 0.05 98.46 09:49:01 all 0.84 0.00 0.16 1.63 0.03 97.35 09:49:01 0 0.48 0.00 0.20 0.00 0.03 99.28 09:49:01 1 0.65 0.00 0.08 0.02 0.03 99.22 09:49:01 2 0.74 0.00 0.13 0.00 0.02 99.11 09:49:01 3 0.93 0.00 0.15 0.20 0.02 98.70 09:49:01 4 0.90 0.00 0.10 0.00 0.03 98.96 09:49:01 5 0.87 0.00 0.33 12.55 0.02 86.24 09:49:01 6 0.90 0.00 0.13 0.12 0.02 98.83 09:49:01 7 1.21 0.00 0.22 0.13 0.03 98.40 09:50:01 all 0.49 0.00 0.13 1.34 0.03 98.02 09:50:01 0 0.50 0.00 0.08 0.00 0.00 99.42 09:50:01 1 0.98 0.00 0.08 0.00 0.02 98.92 09:50:01 2 0.20 0.00 0.10 0.05 0.02 99.63 09:50:01 3 0.59 0.00 0.12 0.13 0.05 99.11 09:50:01 4 0.28 0.00 0.13 0.08 0.05 99.45 09:50:01 5 0.38 0.00 0.15 10.45 0.02 88.99 09:50:01 6 0.15 0.00 0.12 0.00 0.02 99.72 09:50:01 7 0.83 0.00 0.22 0.00 0.03 98.92 09:51:01 all 0.44 0.00 0.14 1.37 0.04 98.02 09:51:01 0 0.93 0.00 0.13 0.00 0.05 98.88 09:51:01 1 0.27 0.00 0.12 0.13 0.03 99.45 09:51:01 2 0.30 0.00 0.10 0.02 0.05 99.53 09:51:01 3 0.42 0.00 0.15 0.02 0.05 99.37 09:51:01 4 0.28 0.00 0.07 0.00 0.03 99.62 09:51:01 5 0.28 0.00 0.18 10.83 0.02 88.69 09:51:01 6 0.35 0.00 0.12 0.00 0.03 99.50 09:51:01 7 0.65 0.00 0.22 0.00 0.03 99.10 09:52:01 all 1.84 0.00 0.26 1.81 0.04 96.05 09:52:01 0 1.78 0.00 0.33 0.25 0.03 97.60 09:52:01 1 2.47 0.00 0.22 0.17 0.02 97.13 09:52:01 2 0.52 0.00 0.13 0.00 0.05 99.30 09:52:01 3 2.98 0.00 0.23 0.10 0.05 96.64 09:52:01 4 0.53 0.00 0.15 0.03 0.03 99.25 09:52:01 5 2.25 0.00 0.30 13.84 0.03 83.57 09:52:01 6 2.14 0.00 0.40 0.02 0.05 97.40 09:52:01 7 2.05 0.00 0.30 0.12 0.03 97.50 09:53:01 all 1.12 0.00 0.33 1.39 0.03 97.13 09:53:01 0 2.35 0.00 0.38 0.00 0.02 97.25 09:53:01 1 0.48 0.00 0.25 0.02 0.02 99.23 09:53:01 2 1.19 0.00 0.45 0.12 
0.05 98.20 09:53:01 3 0.77 0.00 0.40 0.03 0.02 98.78 09:53:01 4 1.66 0.00 0.35 0.40 0.03 97.56 09:53:01 5 0.82 0.00 0.27 9.66 0.03 89.22 09:53:01 6 0.80 0.00 0.25 0.03 0.05 98.86 09:53:01 7 0.89 0.00 0.23 0.82 0.03 98.03 Average: all 5.90 0.00 1.03 2.66 0.04 90.37 Average: 0 4.54 0.00 0.92 5.27 0.04 89.23 Average: 1 5.87 0.00 0.89 1.20 0.04 92.01 Average: 2 4.50 0.00 0.87 0.41 0.04 94.18 Average: 3 5.33 0.00 1.13 1.15 0.05 92.34 Average: 4 4.18 0.00 0.86 0.57 0.05 94.34 Average: 5 8.58 0.00 1.21 7.37 0.04 82.80 Average: 6 7.28 0.00 1.14 0.42 0.04 91.12 Average: 7 6.93 0.00 1.20 4.85 0.05 86.98
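The tables above are the sysstat report for the job window: sar -b (I/O transfer rates), sar -r (memory), sar -n DEV (per-interface network) and sar -P ALL (per-CPU utilisation) between 09:43 and 09:53. A hedged sketch for regenerating the same views from the recorded data on the build node; the sa file location is an assumption (Ubuntu keeps it under /var/log/sysstat, other distributions use /var/log/sa):

# Sketch only: replay the views shown above from the binary sysstat record
# for 22 January (sa22); adjust SA_FILE to match where sysstat writes on the host.
SA_FILE=/var/log/sysstat/sa22
sar -b -r -n DEV -f "$SA_FILE" -s 09:43:00 -e 09:54:00
sar -P ALL -f "$SA_FILE" -s 09:43:00 -e 09:54:00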