16:57:33 Started by upstream project "policy-clamp-master-merge-java" build number 640 16:57:33 originally caused by: 16:57:33 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/clamp/+/137246 16:57:33 Running as SYSTEM 16:57:33 [EnvInject] - Loading node environment variables. 16:57:33 Building remotely on prd-ubuntu1804-docker-8c-8g-5887 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-clamp-master-project-csit-clamp 16:57:33 [ssh-agent] Looking for ssh-agent implementation... 16:57:34 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 16:57:34 $ ssh-agent 16:57:34 SSH_AUTH_SOCK=/tmp/ssh-PecUDMnLK60b/agent.2142 16:57:34 SSH_AGENT_PID=2143 16:57:34 [ssh-agent] Started. 16:57:34 Running ssh-add (command line suppressed) 16:57:34 Identity added: /w/workspace/policy-clamp-master-project-csit-clamp@tmp/private_key_15909279400224558161.key (/w/workspace/policy-clamp-master-project-csit-clamp@tmp/private_key_15909279400224558161.key) 16:57:34 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) 16:57:34 The recommended git tool is: NONE 16:57:35 using credential onap-jenkins-ssh 16:57:35 Wiping out workspace first. 16:57:35 Cloning the remote Git repository 16:57:35 Cloning repository git://cloud.onap.org/mirror/policy/docker.git 16:57:35 > git init /w/workspace/policy-clamp-master-project-csit-clamp # timeout=10 16:57:35 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git 16:57:35 > git --version # timeout=10 16:57:35 > git --version # 'git version 2.17.1' 16:57:35 using GIT_SSH to set credentials Gerrit user 16:57:35 Verifying host key using manually-configured host key entries 16:57:35 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 16:57:36 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 16:57:36 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 16:57:36 Avoid second fetch 16:57:36 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 16:57:36 Checking out Revision dd836dc2d2bd379fba19b395c912d32f1bc7ee38 (refs/remotes/origin/master) 16:57:36 > git config core.sparsecheckout # timeout=10 16:57:36 > git checkout -f dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=30 16:57:37 Commit message: "Update snapshot and/or references of policy/docker to latest snapshots" 16:57:37 > git rev-list --no-walk dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=10 16:57:37 provisioning config files... 
16:57:37 copy managed file [npmrc] to file:/home/jenkins/.npmrc 16:57:37 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 16:57:37 [policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins16153457043047609712.sh 16:57:37 ---> python-tools-install.sh 16:57:37 Setup pyenv: 16:57:37 * system (set by /opt/pyenv/version) 16:57:37 * 3.8.13 (set by /opt/pyenv/version) 16:57:37 * 3.9.13 (set by /opt/pyenv/version) 16:57:37 * 3.10.6 (set by /opt/pyenv/version) 16:57:42 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-Qefd 16:57:42 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 16:57:45 lf-activate-venv(): INFO: Installing: lftools 16:58:25 lf-activate-venv(): INFO: Adding /tmp/venv-Qefd/bin to PATH 16:58:25 Generating Requirements File 16:59:05 Python 3.10.6 16:59:05 pip 24.0 from /tmp/venv-Qefd/lib/python3.10/site-packages/pip (python 3.10) 16:59:05 appdirs==1.4.4 16:59:05 argcomplete==3.2.2 16:59:05 aspy.yaml==1.3.0 16:59:05 attrs==23.2.0 16:59:05 autopage==0.5.2 16:59:05 beautifulsoup4==4.12.3 16:59:05 boto3==1.34.43 16:59:05 botocore==1.34.43 16:59:05 bs4==0.0.2 16:59:05 cachetools==5.3.2 16:59:05 certifi==2024.2.2 16:59:05 cffi==1.16.0 16:59:05 cfgv==3.4.0 16:59:05 chardet==5.2.0 16:59:05 charset-normalizer==3.3.2 16:59:05 click==8.1.7 16:59:05 cliff==4.5.0 16:59:05 cmd2==2.4.3 16:59:05 cryptography==3.3.2 16:59:05 debtcollector==2.5.0 16:59:05 decorator==5.1.1 16:59:05 defusedxml==0.7.1 16:59:05 Deprecated==1.2.14 16:59:05 distlib==0.3.8 16:59:05 dnspython==2.5.0 16:59:05 docker==4.2.2 16:59:05 dogpile.cache==1.3.1 16:59:05 email-validator==2.1.0.post1 16:59:05 filelock==3.13.1 16:59:05 future==0.18.3 16:59:05 gitdb==4.0.11 16:59:05 GitPython==3.1.42 16:59:05 google-auth==2.28.0 16:59:05 httplib2==0.22.0 16:59:05 identify==2.5.34 16:59:05 idna==3.6 16:59:05 importlib-resources==1.5.0 16:59:05 iso8601==2.1.0 16:59:05 Jinja2==3.1.3 16:59:05 jmespath==1.0.1 16:59:05 jsonpatch==1.33 16:59:05 jsonpointer==2.4 16:59:05 jsonschema==4.21.1 16:59:05 jsonschema-specifications==2023.12.1 16:59:05 keystoneauth1==5.5.0 16:59:05 kubernetes==29.0.0 16:59:05 lftools==0.37.8 16:59:05 lxml==5.1.0 16:59:05 MarkupSafe==2.1.5 16:59:05 msgpack==1.0.7 16:59:05 multi_key_dict==2.0.3 16:59:05 munch==4.0.0 16:59:05 netaddr==1.1.0 16:59:05 netifaces==0.11.0 16:59:05 niet==1.4.2 16:59:05 nodeenv==1.8.0 16:59:05 oauth2client==4.1.3 16:59:05 oauthlib==3.2.2 16:59:05 openstacksdk==0.62.0 16:59:05 os-client-config==2.1.0 16:59:05 os-service-types==1.7.0 16:59:05 osc-lib==3.0.0 16:59:05 oslo.config==9.3.0 16:59:05 oslo.context==5.3.0 16:59:05 oslo.i18n==6.2.0 16:59:05 oslo.log==5.4.0 16:59:05 oslo.serialization==5.3.0 16:59:05 oslo.utils==7.0.0 16:59:05 packaging==23.2 16:59:05 pbr==6.0.0 16:59:05 platformdirs==4.2.0 16:59:05 prettytable==3.9.0 16:59:05 pyasn1==0.5.1 16:59:05 pyasn1-modules==0.3.0 16:59:05 pycparser==2.21 16:59:05 pygerrit2==2.0.15 16:59:05 PyGithub==2.2.0 16:59:05 pyinotify==0.9.6 16:59:05 PyJWT==2.8.0 16:59:05 PyNaCl==1.5.0 16:59:05 pyparsing==2.4.7 16:59:05 pyperclip==1.8.2 16:59:05 pyrsistent==0.20.0 16:59:05 python-cinderclient==9.4.0 16:59:05 python-dateutil==2.8.2 16:59:05 python-heatclient==3.4.0 16:59:05 python-jenkins==1.8.2 16:59:05 python-keystoneclient==5.3.0 16:59:05 python-magnumclient==4.3.0 16:59:05 python-novaclient==18.4.0 16:59:05 python-openstackclient==6.0.1 16:59:05 python-swiftclient==4.4.0 16:59:05 pytz==2024.1 16:59:05 PyYAML==6.0.1 16:59:05 referencing==0.33.0 16:59:05 requests==2.31.0 16:59:05 requests-oauthlib==1.3.1 
16:59:05 requestsexceptions==1.4.0 16:59:05 rfc3986==2.0.0 16:59:05 rpds-py==0.18.0 16:59:05 rsa==4.9 16:59:05 ruamel.yaml==0.18.6 16:59:05 ruamel.yaml.clib==0.2.8 16:59:05 s3transfer==0.10.0 16:59:05 simplejson==3.19.2 16:59:05 six==1.16.0 16:59:05 smmap==5.0.1 16:59:05 soupsieve==2.5 16:59:05 stevedore==5.1.0 16:59:05 tabulate==0.9.0 16:59:05 toml==0.10.2 16:59:05 tomlkit==0.12.3 16:59:05 tqdm==4.66.2 16:59:05 typing_extensions==4.9.0 16:59:05 tzdata==2024.1 16:59:05 urllib3==1.26.18 16:59:05 virtualenv==20.25.0 16:59:05 wcwidth==0.2.13 16:59:05 websocket-client==1.7.0 16:59:05 wrapt==1.16.0 16:59:05 xdg==6.0.0 16:59:05 xmltodict==0.13.0 16:59:05 yq==3.2.3 16:59:05 [EnvInject] - Injecting environment variables from a build step. 16:59:05 [EnvInject] - Injecting as environment variables the properties content 16:59:05 SET_JDK_VERSION=openjdk17 16:59:05 GIT_URL="git://cloud.onap.org/mirror" 16:59:05 16:59:05 [EnvInject] - Variables injected successfully. 16:59:05 [policy-clamp-master-project-csit-clamp] $ /bin/sh /tmp/jenkins1238684680695646207.sh 16:59:05 ---> update-java-alternatives.sh 16:59:06 ---> Updating Java version 16:59:06 ---> Ubuntu/Debian system detected 16:59:06 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 16:59:06 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 16:59:06 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 16:59:06 openjdk version "17.0.4" 2022-07-19 16:59:06 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 16:59:06 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 16:59:06 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 16:59:07 [EnvInject] - Injecting environment variables from a build step. 16:59:07 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 16:59:07 [EnvInject] - Variables injected successfully. 
16:59:07 [policy-clamp-master-project-csit-clamp] $ /bin/sh -xe /tmp/jenkins9151122081370211173.sh 16:59:07 + /w/workspace/policy-clamp-master-project-csit-clamp/csit/run-project-csit.sh clamp 16:59:07 + set +u 16:59:07 + save_set 16:59:07 + RUN_CSIT_SAVE_SET=ehxB 16:59:07 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 16:59:07 + '[' 1 -eq 0 ']' 16:59:07 + '[' -z /w/workspace/policy-clamp-master-project-csit-clamp ']' 16:59:07 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin 16:59:07 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin 16:59:07 + export SCRIPTS=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts 16:59:07 + SCRIPTS=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts 16:59:07 + export ROBOT_VARIABLES= 16:59:07 + ROBOT_VARIABLES= 16:59:07 + export PROJECT=clamp 16:59:07 + PROJECT=clamp 16:59:07 + cd /w/workspace/policy-clamp-master-project-csit-clamp 16:59:07 + rm -rf /w/workspace/policy-clamp-master-project-csit-clamp/csit/archives/clamp 16:59:07 + mkdir -p /w/workspace/policy-clamp-master-project-csit-clamp/csit/archives/clamp 16:59:07 + source_safely /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/prepare-robot-env.sh 16:59:07 + '[' -z /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/prepare-robot-env.sh ']' 16:59:07 + relax_set 16:59:07 + set +e 16:59:07 + set +o pipefail 16:59:07 + . /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/prepare-robot-env.sh 16:59:07 ++ '[' -z /w/workspace/policy-clamp-master-project-csit-clamp ']' 16:59:07 +++ mktemp -d 16:59:07 ++ ROBOT_VENV=/tmp/tmp.hhYk7701Ay 16:59:07 ++ echo ROBOT_VENV=/tmp/tmp.hhYk7701Ay 16:59:07 +++ python3 --version 16:59:07 ++ echo 'Python version is: Python 3.6.9' 16:59:07 Python version is: Python 3.6.9 16:59:07 ++ python3 -m venv --clear /tmp/tmp.hhYk7701Ay 16:59:08 ++ source /tmp/tmp.hhYk7701Ay/bin/activate 16:59:08 +++ deactivate nondestructive 16:59:08 +++ '[' -n '' ']' 16:59:08 +++ '[' -n '' ']' 16:59:08 +++ '[' -n /bin/bash -o -n '' ']' 16:59:08 +++ hash -r 16:59:08 +++ '[' -n '' ']' 16:59:08 +++ unset VIRTUAL_ENV 16:59:08 +++ '[' '!' 
nondestructive = nondestructive ']' 16:59:08 +++ VIRTUAL_ENV=/tmp/tmp.hhYk7701Ay 16:59:08 +++ export VIRTUAL_ENV 16:59:08 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin 16:59:08 +++ PATH=/tmp/tmp.hhYk7701Ay/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin 16:59:08 +++ export PATH 16:59:08 +++ '[' -n '' ']' 16:59:08 +++ '[' -z '' ']' 16:59:08 +++ _OLD_VIRTUAL_PS1= 16:59:08 +++ '[' 'x(tmp.hhYk7701Ay) ' '!=' x ']' 16:59:08 +++ PS1='(tmp.hhYk7701Ay) ' 16:59:08 +++ export PS1 16:59:08 +++ '[' -n /bin/bash -o -n '' ']' 16:59:08 +++ hash -r 16:59:08 ++ set -exu 16:59:08 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 16:59:12 ++ echo 'Installing Python Requirements' 16:59:12 Installing Python Requirements 16:59:12 ++ python3 -m pip install -qq -r /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/pylibs.txt 16:59:33 ++ python3 -m pip -qq freeze 16:59:33 bcrypt==4.0.1 16:59:33 beautifulsoup4==4.12.3 16:59:33 bitarray==2.9.2 16:59:33 certifi==2024.2.2 16:59:33 cffi==1.15.1 16:59:33 charset-normalizer==2.0.12 16:59:33 cryptography==40.0.2 16:59:33 decorator==5.1.1 16:59:33 elasticsearch==7.17.9 16:59:33 elasticsearch-dsl==7.4.1 16:59:33 enum34==1.1.10 16:59:33 idna==3.6 16:59:33 importlib-resources==5.4.0 16:59:33 ipaddr==2.2.0 16:59:33 isodate==0.6.1 16:59:33 jmespath==0.10.0 16:59:33 jsonpatch==1.32 16:59:33 jsonpath-rw==1.4.0 16:59:33 jsonpointer==2.3 16:59:33 lxml==5.1.0 16:59:33 netaddr==0.8.0 16:59:33 netifaces==0.11.0 16:59:33 odltools==0.1.28 16:59:33 paramiko==3.4.0 16:59:33 pkg_resources==0.0.0 16:59:33 ply==3.11 16:59:33 pyang==2.6.0 16:59:33 pyangbind==0.8.1 16:59:33 pycparser==2.21 16:59:33 pyhocon==0.3.60 16:59:33 PyNaCl==1.5.0 16:59:33 pyparsing==3.1.1 16:59:33 python-dateutil==2.8.2 16:59:33 regex==2023.8.8 16:59:33 requests==2.27.1 16:59:33 robotframework==6.1.1 16:59:33 robotframework-httplibrary==0.4.2 16:59:33 robotframework-pythonlibcore==3.0.0 16:59:33 robotframework-requests==0.9.4 16:59:33 robotframework-selenium2library==3.0.0 16:59:33 robotframework-seleniumlibrary==5.1.3 16:59:33 robotframework-sshlibrary==3.8.0 16:59:33 scapy==2.5.0 16:59:33 scp==0.14.5 16:59:33 selenium==3.141.0 16:59:33 six==1.16.0 16:59:33 soupsieve==2.3.2.post1 16:59:33 urllib3==1.26.18 16:59:33 waitress==2.0.0 16:59:33 WebOb==1.8.7 16:59:33 WebTest==3.0.0 16:59:33 zipp==3.6.0 16:59:33 ++ mkdir -p /tmp/tmp.hhYk7701Ay/src/onap 16:59:33 ++ rm -rf /tmp/tmp.hhYk7701Ay/src/onap/testsuite 16:59:33 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 16:59:40 ++ echo 'Installing python confluent-kafka library' 16:59:40 Installing python confluent-kafka library 16:59:40 ++ python3 -m pip install -qq confluent-kafka 16:59:42 ++ echo 'Uninstall docker-py and reinstall docker.' 16:59:42 Uninstall docker-py and reinstall docker. 
16:59:42 ++ python3 -m pip uninstall -y -qq docker 16:59:42 ++ python3 -m pip install -U -qq docker 16:59:44 ++ python3 -m pip -qq freeze 16:59:44 bcrypt==4.0.1 16:59:44 beautifulsoup4==4.12.3 16:59:44 bitarray==2.9.2 16:59:44 certifi==2024.2.2 16:59:44 cffi==1.15.1 16:59:44 charset-normalizer==2.0.12 16:59:44 confluent-kafka==2.3.0 16:59:44 cryptography==40.0.2 16:59:44 decorator==5.1.1 16:59:44 deepdiff==5.7.0 16:59:44 dnspython==2.2.1 16:59:44 docker==5.0.3 16:59:44 elasticsearch==7.17.9 16:59:44 elasticsearch-dsl==7.4.1 16:59:44 enum34==1.1.10 16:59:44 future==0.18.3 16:59:44 idna==3.6 16:59:44 importlib-resources==5.4.0 16:59:44 ipaddr==2.2.0 16:59:44 isodate==0.6.1 16:59:44 Jinja2==3.0.3 16:59:44 jmespath==0.10.0 16:59:44 jsonpatch==1.32 16:59:44 jsonpath-rw==1.4.0 16:59:44 jsonpointer==2.3 16:59:44 kafka-python==2.0.2 16:59:44 lxml==5.1.0 16:59:44 MarkupSafe==2.0.1 16:59:44 more-itertools==5.0.0 16:59:44 netaddr==0.8.0 16:59:44 netifaces==0.11.0 16:59:44 odltools==0.1.28 16:59:44 ordered-set==4.0.2 16:59:44 paramiko==3.4.0 16:59:44 pbr==6.0.0 16:59:44 pkg_resources==0.0.0 16:59:44 ply==3.11 16:59:44 protobuf==3.19.6 16:59:44 pyang==2.6.0 16:59:44 pyangbind==0.8.1 16:59:44 pycparser==2.21 16:59:44 pyhocon==0.3.60 16:59:44 PyNaCl==1.5.0 16:59:44 pyparsing==3.1.1 16:59:44 python-dateutil==2.8.2 16:59:44 PyYAML==6.0.1 16:59:44 regex==2023.8.8 16:59:44 requests==2.27.1 16:59:44 robotframework==6.1.1 16:59:44 robotframework-httplibrary==0.4.2 16:59:44 robotframework-onap==0.6.0.dev105 16:59:44 robotframework-pythonlibcore==3.0.0 16:59:44 robotframework-requests==0.9.4 16:59:44 robotframework-selenium2library==3.0.0 16:59:44 robotframework-seleniumlibrary==5.1.3 16:59:44 robotframework-sshlibrary==3.8.0 16:59:44 robotlibcore-temp==1.0.2 16:59:44 scapy==2.5.0 16:59:44 scp==0.14.5 16:59:44 selenium==3.141.0 16:59:44 six==1.16.0 16:59:44 soupsieve==2.3.2.post1 16:59:44 urllib3==1.26.18 16:59:44 waitress==2.0.0 16:59:44 WebOb==1.8.7 16:59:44 websocket-client==1.3.1 16:59:44 WebTest==3.0.0 16:59:44 zipp==3.6.0 16:59:44 ++ uname 16:59:44 ++ grep -q Linux 16:59:44 ++ sudo apt-get -y -qq install libxml2-utils 16:59:44 + load_set 16:59:44 + _setopts=ehuxB 16:59:44 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 16:59:44 ++ tr : ' ' 16:59:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:59:44 + set +o braceexpand 16:59:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:59:44 + set +o hashall 16:59:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:59:44 + set +o interactive-comments 16:59:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:59:44 + set +o nounset 16:59:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:59:44 + set +o xtrace 16:59:44 ++ echo ehuxB 16:59:44 ++ sed 's/./& /g' 16:59:44 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:59:44 + set +e 16:59:44 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:59:44 + set +h 16:59:44 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:59:44 + set +u 16:59:44 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:59:44 + set +x 16:59:44 + source_safely /tmp/tmp.hhYk7701Ay/bin/activate 16:59:44 + '[' -z /tmp/tmp.hhYk7701Ay/bin/activate ']' 16:59:44 + relax_set 16:59:44 + set +e 16:59:44 + set +o pipefail 16:59:44 + . 
/tmp/tmp.hhYk7701Ay/bin/activate 16:59:44 ++ deactivate nondestructive 16:59:44 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin ']' 16:59:44 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin 16:59:44 ++ export PATH 16:59:44 ++ unset _OLD_VIRTUAL_PATH 16:59:44 ++ '[' -n '' ']' 16:59:44 ++ '[' -n /bin/bash -o -n '' ']' 16:59:44 ++ hash -r 16:59:44 ++ '[' -n '' ']' 16:59:44 ++ unset VIRTUAL_ENV 16:59:44 ++ '[' '!' nondestructive = nondestructive ']' 16:59:44 ++ VIRTUAL_ENV=/tmp/tmp.hhYk7701Ay 16:59:44 ++ export VIRTUAL_ENV 16:59:44 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin 16:59:44 ++ PATH=/tmp/tmp.hhYk7701Ay/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin 16:59:44 ++ export PATH 16:59:44 ++ '[' -n '' ']' 16:59:44 ++ '[' -z '' ']' 16:59:44 ++ _OLD_VIRTUAL_PS1='(tmp.hhYk7701Ay) ' 16:59:44 ++ '[' 'x(tmp.hhYk7701Ay) ' '!=' x ']' 16:59:44 ++ PS1='(tmp.hhYk7701Ay) (tmp.hhYk7701Ay) ' 16:59:44 ++ export PS1 16:59:44 ++ '[' -n /bin/bash -o -n '' ']' 16:59:44 ++ hash -r 16:59:44 + load_set 16:59:44 + _setopts=hxB 16:59:44 ++ echo braceexpand:hashall:interactive-comments:xtrace 16:59:44 ++ tr : ' ' 16:59:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:59:44 + set +o braceexpand 16:59:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:59:44 + set +o hashall 16:59:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:59:44 + set +o interactive-comments 16:59:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:59:44 + set +o xtrace 16:59:44 ++ echo hxB 16:59:44 ++ sed 's/./& /g' 16:59:44 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:59:44 + set +h 16:59:44 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:59:44 + set +x 16:59:44 + export TEST_PLAN_DIR=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests 16:59:44 + TEST_PLAN_DIR=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests 16:59:44 + export TEST_OPTIONS= 16:59:44 + TEST_OPTIONS= 16:59:44 ++ mktemp -d 16:59:44 + WORKDIR=/tmp/tmp.0Q219FRrYS 16:59:44 + cd /tmp/tmp.0Q219FRrYS 16:59:44 + docker login -u docker -p docker nexus3.onap.org:10001 16:59:45 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 16:59:45 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 16:59:45 Configure a credential helper to remove this warning. 
See 16:59:45 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 16:59:45 16:59:45 Login Succeeded 16:59:45 + SETUP=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh 16:59:45 + '[' -f /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh ']' 16:59:45 + echo 'Running setup script /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh' 16:59:45 Running setup script /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh 16:59:45 + source_safely /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh 16:59:45 + '[' -z /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh ']' 16:59:45 + relax_set 16:59:45 + set +e 16:59:45 + set +o pipefail 16:59:45 + . /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh 16:59:45 ++ source /w/workspace/policy-clamp-master-project-csit-clamp/compose/start-compose.sh policy-clamp-runtime-acm 16:59:45 +++ '[' -z /w/workspace/policy-clamp-master-project-csit-clamp ']' 16:59:45 +++ COMPOSE_FOLDER=/w/workspace/policy-clamp-master-project-csit-clamp/compose 16:59:45 +++ grafana=false 16:59:45 +++ gui=false 16:59:45 +++ [[ 1 -gt 0 ]] 16:59:45 +++ key=policy-clamp-runtime-acm 16:59:45 +++ case $key in 16:59:45 +++ echo policy-clamp-runtime-acm 16:59:45 policy-clamp-runtime-acm 16:59:45 +++ component=policy-clamp-runtime-acm 16:59:45 +++ shift 16:59:45 +++ [[ 0 -gt 0 ]] 16:59:45 +++ cd /w/workspace/policy-clamp-master-project-csit-clamp/compose 16:59:45 +++ echo 'Configuring docker compose...' 16:59:45 Configuring docker compose... 16:59:45 +++ source export-ports.sh 16:59:45 +++ source get-versions.sh 16:59:47 +++ '[' -z clamp ']' 16:59:47 +++ '[' -n policy-clamp-runtime-acm ']' 16:59:47 +++ '[' policy-clamp-runtime-acm == logs ']' 16:59:47 +++ '[' false = true ']' 16:59:47 +++ '[' false = true ']' 16:59:47 +++ echo 'Starting policy-clamp-runtime-acm application' 16:59:47 Starting policy-clamp-runtime-acm application 16:59:47 +++ docker-compose up -d policy-clamp-runtime-acm 16:59:48 Creating network "compose_default" with the default driver 16:59:48 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 16:59:48 10.10.2: Pulling from mariadb 16:59:54 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 16:59:54 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 16:59:54 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 16:59:54 latest: Pulling from confluentinc/cp-zookeeper 17:00:09 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 17:00:09 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 17:00:09 Pulling kafka (confluentinc/cp-kafka:latest)... 17:00:11 latest: Pulling from confluentinc/cp-kafka 17:00:16 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 17:00:16 Status: Downloaded newer image for confluentinc/cp-kafka:latest 17:00:16 Pulling policy-clamp-ac-sim-ppnt (nexus3.onap.org:10001/onap/policy-clamp-ac-sim-ppnt:7.1.1-SNAPSHOT)... 
17:00:16 7.1.1-SNAPSHOT: Pulling from onap/policy-clamp-ac-sim-ppnt 17:00:20 Digest: sha256:bd70965fec25762cd3551e547da573fa23788244542040c9855a4ebf0a655262 17:00:20 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-clamp-ac-sim-ppnt:7.1.1-SNAPSHOT 17:00:20 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)... 17:00:20 3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator 17:00:32 Digest: sha256:6dc9b5d15d5c92b51ee9067496c5209e4419813b605f45e6e3ce7c61cbd0cf2d 17:00:32 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT 17:00:32 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)... 17:00:32 3.1.1-SNAPSHOT: Pulling from onap/policy-api 17:00:45 Digest: sha256:eb5d7fea250b871a2b19a735fdda30ee8abdd061ba9bd10eaf9a1e0174cfa7b2 17:00:46 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT 17:00:46 Pulling policy-clamp-ac-pf-ppnt (nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:7.1.1-SNAPSHOT)... 17:00:57 7.1.1-SNAPSHOT: Pulling from onap/policy-clamp-ac-pf-ppnt 17:01:08 Digest: sha256:549067aeacac2585a4af39c1cb57406d5e998573ca3f202c32eb6098b6bbe3d9 17:01:08 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:7.1.1-SNAPSHOT 17:01:08 Pulling policy-clamp-ac-k8s-ppnt (nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:7.1.1-SNAPSHOT)... 17:01:09 7.1.1-SNAPSHOT: Pulling from onap/policy-clamp-ac-k8s-ppnt 17:01:12 Digest: sha256:28f8e736c871a23f1e8a252ab4c28f42d3c46837cd9a1aab9a642f8dd6f8115c 17:01:12 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:7.1.1-SNAPSHOT 17:01:12 Pulling policy-clamp-ac-http-ppnt (nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:7.1.1-SNAPSHOT)... 17:01:12 7.1.1-SNAPSHOT: Pulling from onap/policy-clamp-ac-http-ppnt 17:01:15 Digest: sha256:34cdf39fffeb63d7d3b0054b3329bc37bef10ec9c466bccdae7b04146ad7e5cd 17:01:15 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:7.1.1-SNAPSHOT 17:01:15 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)... 17:01:15 3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator 17:01:18 Digest: sha256:191ea80d58976372d6ed1c0c58381553b1e255dde7f5cbf6557b43cee2dc0cb8 17:01:18 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT 17:01:18 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)... 17:01:18 3.1.1-SNAPSHOT: Pulling from onap/policy-pap 17:01:21 Digest: sha256:6f65ebb517d097077c06300f7875917002962059c8bced95472abcc16576545d 17:01:21 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT 17:01:21 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)... 17:01:22 3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp 17:01:29 Digest: sha256:4984f5fe593948014caebf59b85572848ecc1c75f1e549aa260b2081ca9ce66a 17:01:29 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT 17:01:29 Pulling policy-clamp-runtime-acm (nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:7.1.1-SNAPSHOT)... 17:01:30 7.1.1-SNAPSHOT: Pulling from onap/policy-clamp-runtime-acm 17:01:32 Digest: sha256:f8208faefad2f9a270f3af3a69ce3d357a5d518099ecbef5a4a02f93e7edf298 17:01:32 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:7.1.1-SNAPSHOT 17:01:33 Creating simulator ... 
17:01:33 Creating compose_zookeeper_1 ... 17:01:33 Creating mariadb ... 17:02:23 Creating compose_zookeeper_1 ... done 17:02:23 Creating kafka ... 17:02:24 Creating mariadb ... done 17:02:24 Creating policy-db-migrator ... 17:02:25 Creating kafka ... done 17:02:25 Creating policy-clamp-ac-http-ppnt ... 17:02:25 Creating policy-clamp-ac-sim-ppnt ... 17:02:25 Creating policy-clamp-ac-k8s-ppnt ... 17:02:26 Creating simulator ... done 17:02:27 Creating policy-db-migrator ... done 17:02:27 Creating policy-api ... 17:02:28 Creating policy-clamp-ac-http-ppnt ... done 17:02:29 Creating policy-clamp-ac-k8s-ppnt ... done 17:02:30 Creating policy-api ... done 17:02:30 Creating policy-pap ... 17:02:30 Creating policy-clamp-ac-pf-ppnt ... 17:02:31 Creating policy-clamp-ac-sim-ppnt ... done 17:02:33 Creating policy-pap ... done 17:02:33 Creating policy-apex-pdp ... 17:02:34 Creating policy-apex-pdp ... done 17:02:35 Creating policy-clamp-ac-pf-ppnt ... done 17:02:35 Creating policy-clamp-runtime-acm ... 17:02:36 Creating policy-clamp-runtime-acm ... done 17:02:36 +++ cd /w/workspace/policy-clamp-master-project-csit-clamp 17:02:36 ++ sleep 10 17:02:46 ++ unset http_proxy https_proxy 17:02:46 ++ /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/wait_for_rest.sh localhost 30007 17:02:46 Waiting for REST to come up on localhost port 30007... 17:02:46 NAMES STATUS 17:02:46 policy-clamp-runtime-acm Up 10 seconds 17:02:46 policy-apex-pdp Up 12 seconds 17:02:46 policy-clamp-ac-pf-ppnt Up 11 seconds 17:02:46 policy-pap Up 13 seconds 17:02:46 policy-api Up 16 seconds 17:02:46 policy-clamp-ac-k8s-ppnt Up 17 seconds 17:02:46 policy-clamp-ac-sim-ppnt Up 15 seconds 17:02:46 policy-clamp-ac-http-ppnt Up 18 seconds 17:02:46 kafka Up 21 seconds 17:02:46 compose_zookeeper_1 Up 23 seconds 17:02:46 simulator Up 20 seconds 17:02:46 mariadb Up 22 seconds 17:02:51 NAMES STATUS 17:02:51 policy-clamp-runtime-acm Up 15 seconds 17:02:51 policy-apex-pdp Up 17 seconds 17:02:51 policy-clamp-ac-pf-ppnt Up 16 seconds 17:02:51 policy-pap Up 18 seconds 17:02:51 policy-api Up 21 seconds 17:02:51 policy-clamp-ac-k8s-ppnt Up 22 seconds 17:02:51 policy-clamp-ac-sim-ppnt Up 20 seconds 17:02:51 policy-clamp-ac-http-ppnt Up 23 seconds 17:02:51 kafka Up 26 seconds 17:02:51 compose_zookeeper_1 Up 28 seconds 17:02:51 simulator Up 25 seconds 17:02:51 mariadb Up 27 seconds 17:02:56 NAMES STATUS 17:02:56 policy-clamp-runtime-acm Up 20 seconds 17:02:56 policy-apex-pdp Up 22 seconds 17:02:56 policy-clamp-ac-pf-ppnt Up 21 seconds 17:02:56 policy-pap Up 23 seconds 17:02:56 policy-api Up 26 seconds 17:02:56 policy-clamp-ac-k8s-ppnt Up 27 seconds 17:02:56 policy-clamp-ac-sim-ppnt Up 25 seconds 17:02:56 policy-clamp-ac-http-ppnt Up 28 seconds 17:02:56 kafka Up 31 seconds 17:02:56 compose_zookeeper_1 Up 33 seconds 17:02:56 simulator Up 30 seconds 17:02:56 mariadb Up 32 seconds 17:03:01 NAMES STATUS 17:03:01 policy-clamp-runtime-acm Up 25 seconds 17:03:01 policy-apex-pdp Up 27 seconds 17:03:01 policy-clamp-ac-pf-ppnt Up 26 seconds 17:03:01 policy-pap Up 28 seconds 17:03:01 policy-api Up 31 seconds 17:03:01 policy-clamp-ac-k8s-ppnt Up 32 seconds 17:03:01 policy-clamp-ac-sim-ppnt Up 30 seconds 17:03:01 policy-clamp-ac-http-ppnt Up 33 seconds 17:03:01 kafka Up 36 seconds 17:03:01 compose_zookeeper_1 Up 38 seconds 17:03:01 simulator Up 35 seconds 17:03:01 mariadb Up 37 seconds 17:03:06 NAMES STATUS 17:03:06 policy-clamp-runtime-acm Up 30 seconds 17:03:06 policy-apex-pdp Up 32 seconds 17:03:06 policy-clamp-ac-pf-ppnt Up 31 seconds 
17:03:06 policy-pap Up 33 seconds 17:03:06 policy-api Up 36 seconds 17:03:06 policy-clamp-ac-k8s-ppnt Up 37 seconds 17:03:06 policy-clamp-ac-sim-ppnt Up 35 seconds 17:03:06 policy-clamp-ac-http-ppnt Up 38 seconds 17:03:06 kafka Up 41 seconds 17:03:06 compose_zookeeper_1 Up 43 seconds 17:03:06 simulator Up 40 seconds 17:03:06 mariadb Up 42 seconds 17:03:12 NAMES STATUS 17:03:12 policy-clamp-runtime-acm Up 35 seconds 17:03:12 policy-apex-pdp Up 37 seconds 17:03:12 policy-clamp-ac-pf-ppnt Up 36 seconds 17:03:12 policy-pap Up 39 seconds 17:03:12 policy-api Up 41 seconds 17:03:12 policy-clamp-ac-k8s-ppnt Up 42 seconds 17:03:12 policy-clamp-ac-sim-ppnt Up 40 seconds 17:03:12 policy-clamp-ac-http-ppnt Up 43 seconds 17:03:12 kafka Up 47 seconds 17:03:12 compose_zookeeper_1 Up 49 seconds 17:03:12 simulator Up 45 seconds 17:03:12 mariadb Up 48 seconds 17:03:17 NAMES STATUS 17:03:17 policy-clamp-runtime-acm Up 40 seconds 17:03:17 policy-apex-pdp Up 43 seconds 17:03:17 policy-clamp-ac-pf-ppnt Up 41 seconds 17:03:17 policy-pap Up 44 seconds 17:03:17 policy-api Up 46 seconds 17:03:17 policy-clamp-ac-k8s-ppnt Up 47 seconds 17:03:17 policy-clamp-ac-sim-ppnt Up 45 seconds 17:03:17 policy-clamp-ac-http-ppnt Up 48 seconds 17:03:17 kafka Up 52 seconds 17:03:17 compose_zookeeper_1 Up 54 seconds 17:03:17 simulator Up 50 seconds 17:03:17 mariadb Up 53 seconds 17:03:22 NAMES STATUS 17:03:22 policy-clamp-runtime-acm Up 45 seconds 17:03:22 policy-apex-pdp Up 48 seconds 17:03:22 policy-clamp-ac-pf-ppnt Up 47 seconds 17:03:22 policy-pap Up 49 seconds 17:03:22 policy-api Up 51 seconds 17:03:22 policy-clamp-ac-k8s-ppnt Up 52 seconds 17:03:22 policy-clamp-ac-sim-ppnt Up 50 seconds 17:03:22 policy-clamp-ac-http-ppnt Up 53 seconds 17:03:22 kafka Up 57 seconds 17:03:22 compose_zookeeper_1 Up 59 seconds 17:03:22 simulator Up 56 seconds 17:03:22 mariadb Up 58 seconds 17:03:27 NAMES STATUS 17:03:27 policy-clamp-runtime-acm Up 50 seconds 17:03:27 policy-apex-pdp Up 53 seconds 17:03:27 policy-clamp-ac-pf-ppnt Up 52 seconds 17:03:27 policy-pap Up 54 seconds 17:03:27 policy-api Up 56 seconds 17:03:27 policy-clamp-ac-k8s-ppnt Up 57 seconds 17:03:27 policy-clamp-ac-sim-ppnt Up 55 seconds 17:03:27 policy-clamp-ac-http-ppnt Up 58 seconds 17:03:27 kafka Up About a minute 17:03:27 compose_zookeeper_1 Up About a minute 17:03:27 simulator Up About a minute 17:03:27 mariadb Up About a minute 17:03:32 NAMES STATUS 17:03:32 policy-clamp-runtime-acm Up 55 seconds 17:03:32 policy-apex-pdp Up 58 seconds 17:03:32 policy-clamp-ac-pf-ppnt Up 57 seconds 17:03:32 policy-pap Up 59 seconds 17:03:32 policy-api Up About a minute 17:03:32 policy-clamp-ac-k8s-ppnt Up About a minute 17:03:32 policy-clamp-ac-sim-ppnt Up About a minute 17:03:32 policy-clamp-ac-http-ppnt Up About a minute 17:03:32 kafka Up About a minute 17:03:32 compose_zookeeper_1 Up About a minute 17:03:32 simulator Up About a minute 17:03:32 mariadb Up About a minute 17:03:37 NAMES STATUS 17:03:37 policy-clamp-runtime-acm Up About a minute 17:03:37 policy-apex-pdp Up About a minute 17:03:37 policy-clamp-ac-pf-ppnt Up About a minute 17:03:37 policy-pap Up About a minute 17:03:37 policy-api Up About a minute 17:03:37 policy-clamp-ac-k8s-ppnt Up About a minute 17:03:37 policy-clamp-ac-sim-ppnt Up About a minute 17:03:37 policy-clamp-ac-http-ppnt Up About a minute 17:03:37 kafka Up About a minute 17:03:37 compose_zookeeper_1 Up About a minute 17:03:37 simulator Up About a minute 17:03:37 mariadb Up About a minute 17:03:42 NAMES STATUS 17:03:42 policy-clamp-runtime-acm Up About a minute 
17:03:42 policy-apex-pdp Up About a minute
17:03:42 policy-clamp-ac-pf-ppnt Up About a minute
17:03:42 policy-pap Up About a minute
17:03:42 policy-api Up About a minute
17:03:42 policy-clamp-ac-k8s-ppnt Up About a minute
17:03:42 policy-clamp-ac-sim-ppnt Up About a minute
17:03:42 policy-clamp-ac-http-ppnt Up About a minute
17:03:42 kafka Up About a minute
17:03:42 compose_zookeeper_1 Up About a minute
17:03:42 simulator Up About a minute
17:03:42 mariadb Up About a minute
17:03:42 ++ CLAMP_K8S_TEST=false
17:03:42 ++ export SUITES=policy-clamp-test.robot
17:03:42 ++ SUITES=policy-clamp-test.robot
17:03:42 ++ ROBOT_VARIABLES='-v POLICY_RUNTIME_ACM_IP:localhost:30007
17:03:42 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v CLAMP_K8S_TEST:false'
17:03:42 + load_set
17:03:42 + _setopts=hxB
17:03:42 ++ echo braceexpand:hashall:interactive-comments:xtrace
17:03:42 ++ tr : ' '
17:03:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
17:03:42 + set +o braceexpand
17:03:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
17:03:42 + set +o hashall
17:03:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
17:03:42 + set +o interactive-comments
17:03:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
17:03:42 + set +o xtrace
17:03:42 ++ echo hxB
17:03:42 ++ sed 's/./& /g'
17:03:42 + for i in $(echo "$_setopts" | sed 's/./& /g')
17:03:42 + set +h
17:03:42 + for i in $(echo "$_setopts" | sed 's/./& /g')
17:03:42 + set +x
17:03:42 + docker_stats
17:03:42 ++ uname -s
17:03:42 + tee /w/workspace/policy-clamp-master-project-csit-clamp/csit/archives/clamp/_sysinfo-1-after-setup.txt
17:03:42 + '[' Linux == Darwin ']'
17:03:42 + sh -c 'top -bn1 | head -3'
17:03:42 top - 17:03:42 up 7 min, 0 users, load average: 5.76, 3.09, 1.29
17:03:42 Tasks: 227 total, 1 running, 153 sleeping, 0 stopped, 0 zombie
17:03:42 %Cpu(s): 15.3 us, 2.6 sy, 0.0 ni, 73.6 id, 8.3 wa, 0.0 hi, 0.1 si, 0.1 st
17:03:42 + echo
17:03:42 
17:03:42 + sh -c 'free -h'
17:03:42 total used free shared buff/cache available
17:03:42 Mem: 31G 4.6G 20G 1.5M 6.0G 26G
17:03:42 Swap: 1.0G 0B 1.0G
17:03:42 + echo
17:03:42 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
17:03:42 
17:03:42 NAMES STATUS
17:03:42 policy-clamp-runtime-acm Up About a minute
17:03:42 policy-apex-pdp Up About a minute
17:03:42 policy-clamp-ac-pf-ppnt Up About a minute
17:03:42 policy-pap Up About a minute
17:03:42 policy-api Up About a minute
17:03:42 policy-clamp-ac-k8s-ppnt Up About a minute
17:03:42 policy-clamp-ac-sim-ppnt Up About a minute
17:03:42 policy-clamp-ac-http-ppnt Up About a minute
17:03:42 kafka Up About a minute
17:03:42 compose_zookeeper_1 Up About a minute
17:03:42 simulator Up About a minute
17:03:42 mariadb Up About a minute
17:03:42 + echo
17:03:42 + docker stats --no-stream
17:03:42 
17:03:45 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
17:03:45 c61731c9c243 policy-clamp-runtime-acm 12.32% 622.1MiB / 31.41GiB 1.93% 18.8kB / 24.8kB 0B / 0B 63
17:03:45 213b00e7c260 policy-apex-pdp 0.97% 181.3MiB / 31.41GiB 0.56% 16kB / 16.8kB 0B / 0B 49
17:03:45 f383e9ab71d8 policy-clamp-ac-pf-ppnt 1.28% 340.2MiB / 31.41GiB 1.06% 18.9kB / 20.5kB 0B / 0B 59
17:03:45 e78d508c31c9 policy-pap 3.56% 486MiB / 31.41GiB 1.51% 41.7kB / 48.3kB 0B / 153MB 62
17:03:45 0b2e518b5b17 policy-api 0.23% 457.9MiB / 31.41GiB 1.42% 1.01MB / 714kB 0B / 0B 53
17:03:45 79ce0779cdba policy-clamp-ac-k8s-ppnt 1.22% 413MiB / 31.41GiB 1.28% 25.4kB / 29.3kB 0B / 0B 61
17:03:45 494a1beef045 policy-clamp-ac-sim-ppnt 1.03% 353.7MiB / 31.41GiB 1.10% 31.7kB / 
38.4kB 0B / 0B 61
17:03:45 ab79f0f548cb policy-clamp-ac-http-ppnt 1.62% 303.6MiB / 31.41GiB 0.94% 26.7kB / 30.3kB 0B / 0B 60
17:03:45 d35b1e98ac95 kafka 8.25% 387.2MiB / 31.41GiB 1.20% 202kB / 182kB 0B / 557kB 85
17:03:45 04f691a4a685 compose_zookeeper_1 0.28% 103.2MiB / 31.41GiB 0.32% 58kB / 50.2kB 229kB / 483kB 60
17:03:45 151761c6280e simulator 0.11% 120.9MiB / 31.41GiB 0.38% 1.72kB / 0B 0B / 0B 77
17:03:45 362d9fdeb62c mariadb 0.02% 102.9MiB / 31.41GiB 0.32% 1.02MB / 1.21MB 10.8MB / 65.3MB 42
17:03:45 + echo
17:03:45 
17:03:45 + cd /tmp/tmp.0Q219FRrYS
17:03:45 + echo 'Reading the testplan:'
17:03:45 Reading the testplan:
17:03:45 + echo policy-clamp-test.robot
17:03:45 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
17:03:45 + sed 's|^|/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/|'
17:03:45 + cat testplan.txt
17:03:45 /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/policy-clamp-test.robot
17:03:45 ++ xargs
17:03:45 + SUITES=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/policy-clamp-test.robot
17:03:45 + echo 'ROBOT_VARIABLES=-v POLICY_RUNTIME_ACM_IP:localhost:30007
17:03:45 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v CLAMP_K8S_TEST:false'
17:03:45 ROBOT_VARIABLES=-v POLICY_RUNTIME_ACM_IP:localhost:30007
17:03:45 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v CLAMP_K8S_TEST:false
17:03:45 + echo 'Starting Robot test suites /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/policy-clamp-test.robot ...'
17:03:45 Starting Robot test suites /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/policy-clamp-test.robot ...
17:03:45 + relax_set
17:03:45 + set +e
17:03:45 + set +o pipefail
17:03:45 + python3 -m robot.run -N clamp -v WORKSPACE:/tmp -v POLICY_RUNTIME_ACM_IP:localhost:30007 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v CLAMP_K8S_TEST:false /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/policy-clamp-test.robot
17:03:45 ==============================================================================
17:03:45 clamp
17:03:45 ==============================================================================
17:03:46 Healthcheck :: Healthcheck on Clamp Acm | PASS |
17:03:46 ------------------------------------------------------------------------------
17:03:47 CommissionAutomationComposition :: Commission automation composition. | PASS |
17:03:47 ------------------------------------------------------------------------------
17:03:47 RegisterParticipants :: Register Participants. | PASS |
17:03:47 ------------------------------------------------------------------------------
17:03:53 PrimeACDefinitions :: Prime automation composition definition | PASS |
17:03:53 ------------------------------------------------------------------------------
17:03:53 InstantiateAutomationComposition :: Instantiate automation composi... | PASS |
17:03:53 ------------------------------------------------------------------------------
17:03:58 DeployAutomationComposition :: Deploy automation composition. | PASS |
17:03:58 ------------------------------------------------------------------------------
17:04:09 QueryPolicies :: Verify the new policies deployed | PASS |
17:04:09 ------------------------------------------------------------------------------
17:04:21 QueryPolicyTypes :: Verify the new policy types created | PASS |
17:04:21 ------------------------------------------------------------------------------
17:04:26 UnDeployAutomationComposition :: UnDeploy automation composition. | PASS |
17:04:26 ------------------------------------------------------------------------------
17:04:26 UnInstantiateAutomationComposition :: Delete automation compositio... | PASS |
17:04:26 ------------------------------------------------------------------------------
17:04:32 DePrimeACDefinitions :: DePrime automation composition definition | PASS |
17:04:32 ------------------------------------------------------------------------------
17:04:32 DeleteACDefinition :: Delete automation composition definition. | PASS |
17:04:32 ------------------------------------------------------------------------------
17:04:32 clamp | PASS |
17:04:32 12 tests, 12 passed, 0 failed
17:04:32 ==============================================================================
17:04:32 Output: /tmp/tmp.0Q219FRrYS/output.xml
17:04:32 Log: /tmp/tmp.0Q219FRrYS/log.html
17:04:32 Report: /tmp/tmp.0Q219FRrYS/report.html
17:04:32 + RESULT=0
17:04:32 + load_set
17:04:32 + _setopts=hxB
17:04:32 ++ echo braceexpand:hashall:interactive-comments:xtrace
17:04:32 ++ tr : ' '
17:04:32 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
17:04:32 + set +o braceexpand
17:04:32 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
17:04:32 + set +o hashall
17:04:32 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
17:04:32 + set +o interactive-comments
17:04:32 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
17:04:32 + set +o xtrace
17:04:32 ++ echo hxB
17:04:32 ++ sed 's/./& /g'
17:04:32 + for i in $(echo "$_setopts" | sed 's/./& /g')
17:04:32 + set +h
17:04:32 + for i in $(echo "$_setopts" | sed 's/./& /g')
17:04:32 + set +x
17:04:32 + echo 'RESULT: 0'
17:04:32 RESULT: 0
17:04:32 + exit 0
17:04:32 + on_exit
17:04:32 + rc=0
17:04:32 + [[ -n /w/workspace/policy-clamp-master-project-csit-clamp ]]
17:04:32 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
17:04:32 NAMES STATUS
17:04:32 policy-clamp-runtime-acm Up About a minute
17:04:32 policy-apex-pdp Up About a minute
17:04:32 policy-clamp-ac-pf-ppnt Up About a minute
17:04:32 policy-pap Up About a minute
17:04:32 policy-api Up 2 minutes
17:04:32 policy-clamp-ac-k8s-ppnt Up 2 minutes
17:04:32 policy-clamp-ac-sim-ppnt Up 2 minutes
17:04:32 policy-clamp-ac-http-ppnt Up 2 minutes
17:04:32 kafka Up 2 minutes
17:04:32 compose_zookeeper_1 Up 2 minutes
17:04:32 simulator Up 2 minutes
17:04:32 mariadb Up 2 minutes
17:04:32 + docker_stats
17:04:32 ++ uname -s
17:04:32 + '[' Linux == Darwin ']'
17:04:32 + sh -c 'top -bn1 | head -3'
17:04:32 top - 17:04:32 up 7 min, 0 users, load average: 3.31, 2.88, 1.32
17:04:32 Tasks: 225 total, 1 running, 151 sleeping, 0 stopped, 0 zombie
17:04:32 %Cpu(s): 15.3 us, 2.5 sy, 0.0 ni, 74.4 id, 7.7 wa, 0.0 hi, 0.1 si, 0.1 st
17:04:32 + echo
17:04:32 
17:04:32 + sh -c 'free -h'
17:04:32 total used free shared buff/cache available
17:04:32 Mem: 31G 5.1G 20G 1.5M 6.0G 25G
17:04:32 Swap: 1.0G 0B 1.0G
17:04:32 + echo
17:04:32 
17:04:32 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
17:04:32 NAMES STATUS
17:04:32 policy-clamp-runtime-acm Up About a minute
17:04:32 policy-apex-pdp Up 
About a minute 17:04:32 policy-clamp-ac-pf-ppnt Up About a minute 17:04:32 policy-pap Up About a minute 17:04:32 policy-api Up 2 minutes 17:04:32 policy-clamp-ac-k8s-ppnt Up 2 minutes 17:04:32 policy-clamp-ac-sim-ppnt Up 2 minutes 17:04:32 policy-clamp-ac-http-ppnt Up 2 minutes 17:04:32 kafka Up 2 minutes 17:04:32 compose_zookeeper_1 Up 2 minutes 17:04:32 simulator Up 2 minutes 17:04:32 mariadb Up 2 minutes 17:04:32 + echo 17:04:32 17:04:32 + docker stats --no-stream 17:04:35 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 17:04:35 c61731c9c243 policy-clamp-runtime-acm 0.67% 639.5MiB / 31.41GiB 1.99% 7.23MB / 1.23MB 0B / 0B 69 17:04:35 213b00e7c260 policy-apex-pdp 0.67% 233.2MiB / 31.41GiB 0.73% 74.8kB / 69.7kB 0B / 16.4kB 57 17:04:35 f383e9ab71d8 policy-clamp-ac-pf-ppnt 1.33% 419.9MiB / 31.41GiB 1.31% 171kB / 126kB 0B / 0B 65 17:04:35 e78d508c31c9 policy-pap 1.46% 497.7MiB / 31.41GiB 1.55% 1.72MB / 505kB 0B / 153MB 63 17:04:35 0b2e518b5b17 policy-api 0.14% 865.8MiB / 31.41GiB 2.69% 2.58MB / 1.27MB 0B / 0B 55 17:04:35 79ce0779cdba policy-clamp-ac-k8s-ppnt 0.41% 414.9MiB / 31.41GiB 1.29% 96.6kB / 67.5kB 0B / 0B 64 17:04:35 494a1beef045 policy-clamp-ac-sim-ppnt 0.36% 352.7MiB / 31.41GiB 1.10% 103kB / 75.6kB 0B / 0B 62 17:04:35 ab79f0f548cb policy-clamp-ac-http-ppnt 1.01% 304.4MiB / 31.41GiB 0.95% 98.2kB / 69.1kB 0B / 0B 63 17:04:35 d35b1e98ac95 kafka 6.51% 393.6MiB / 31.41GiB 1.22% 567kB / 695kB 0B / 623kB 85 17:04:35 04f691a4a685 compose_zookeeper_1 0.30% 105.2MiB / 31.41GiB 0.33% 63.3kB / 54.8kB 229kB / 532kB 60 17:04:35 151761c6280e simulator 0.12% 121MiB / 31.41GiB 0.38% 1.85kB / 0B 0B / 0B 78 17:04:35 362d9fdeb62c mariadb 0.04% 105.1MiB / 31.41GiB 0.33% 2.74MB / 11.4MB 10.8MB / 65.8MB 43 17:04:35 + echo 17:04:35 17:04:35 + source_safely /w/workspace/policy-clamp-master-project-csit-clamp/compose/stop-compose.sh 17:04:35 + '[' -z /w/workspace/policy-clamp-master-project-csit-clamp/compose/stop-compose.sh ']' 17:04:35 + relax_set 17:04:35 + set +e 17:04:35 + set +o pipefail 17:04:35 + . /w/workspace/policy-clamp-master-project-csit-clamp/compose/stop-compose.sh 17:04:35 ++ echo 'Shut down started!' 17:04:35 Shut down started! 17:04:35 ++ '[' -z /w/workspace/policy-clamp-master-project-csit-clamp ']' 17:04:35 ++ COMPOSE_FOLDER=/w/workspace/policy-clamp-master-project-csit-clamp/compose 17:04:35 ++ cd /w/workspace/policy-clamp-master-project-csit-clamp/compose 17:04:35 ++ source export-ports.sh 17:04:35 ++ source get-versions.sh 17:04:37 ++ echo 'Collecting logs from docker compose containers...' 17:04:37 Collecting logs from docker compose containers... 17:04:37 ++ docker-compose logs 17:04:40 ++ cat docker_compose.log 17:04:40 Attaching to policy-clamp-runtime-acm, policy-apex-pdp, policy-clamp-ac-pf-ppnt, policy-pap, policy-api, policy-clamp-ac-k8s-ppnt, policy-clamp-ac-sim-ppnt, policy-clamp-ac-http-ppnt, policy-db-migrator, kafka, compose_zookeeper_1, simulator, mariadb 17:04:40 mariadb | 2024-02-16 17:02:24+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 17:04:40 mariadb | 2024-02-16 17:02:24+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 17:04:40 mariadb | 2024-02-16 17:02:24+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
17:04:40 mariadb | 2024-02-16 17:02:24+00:00 [Note] [Entrypoint]: Initializing database files 17:04:40 mariadb | 2024-02-16 17:02:24 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 17:04:40 mariadb | 2024-02-16 17:02:24 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 17:04:40 mariadb | 2024-02-16 17:02:24 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 17:04:40 mariadb | 17:04:40 mariadb | 17:04:40 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 17:04:40 mariadb | To do so, start the server, then issue the following command: 17:04:40 mariadb | 17:04:40 mariadb | '/usr/bin/mysql_secure_installation' 17:04:40 mariadb | 17:04:40 mariadb | which will also give you the option of removing the test 17:04:40 mariadb | databases and anonymous user created by default. This is 17:04:40 mariadb | strongly recommended for production servers. 17:04:40 mariadb | 17:04:40 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 17:04:40 mariadb | 17:04:40 mariadb | Please report any problems at https://mariadb.org/jira 17:04:40 mariadb | 17:04:40 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 17:04:40 mariadb | 17:04:40 mariadb | Consider joining MariaDB's strong and vibrant community: 17:04:40 mariadb | https://mariadb.org/get-involved/ 17:04:40 mariadb | 17:04:40 mariadb | 2024-02-16 17:02:27+00:00 [Note] [Entrypoint]: Database files initialized 17:04:40 mariadb | 2024-02-16 17:02:27+00:00 [Note] [Entrypoint]: Starting temporary server 17:04:40 mariadb | 2024-02-16 17:02:27+00:00 [Note] [Entrypoint]: Waiting for server startup 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] InnoDB: Number of transaction pools: 1 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] InnoDB: Completed initialization of buffer pool 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] InnoDB: 128 rollback segments are active. 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] InnoDB: log sequence number 46456; transaction id 14 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] Plugin 'FEEDBACK' is disabled. 
17:04:40 mariadb | 2024-02-16 17:02:28 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 17:04:40 mariadb | 2024-02-16 17:02:28 0 [Note] mariadbd: ready for connections. 17:04:40 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 17:04:40 mariadb | 2024-02-16 17:02:29+00:00 [Note] [Entrypoint]: Temporary server started. 17:04:40 mariadb | 2024-02-16 17:02:31+00:00 [Note] [Entrypoint]: Creating user policy_user 17:04:40 mariadb | 2024-02-16 17:02:31+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 17:04:40 mariadb | 17:04:40 mariadb | 17:04:40 mariadb | 2024-02-16 17:02:32+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 17:04:40 mariadb | 2024-02-16 17:02:32+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 17:04:40 mariadb | #!/bin/bash -xv 17:04:40 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 17:04:40 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 17:04:40 mariadb | # 17:04:40 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 17:04:40 mariadb | # you may not use this file except in compliance with the License. 17:04:40 zookeeper_1 | ===> User 17:04:40 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 17:04:40 zookeeper_1 | ===> Configuring ... 17:04:40 zookeeper_1 | ===> Running preflight checks ... 17:04:40 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 17:04:40 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 17:04:40 zookeeper_1 | ===> Launching ... 17:04:40 zookeeper_1 | ===> Launching zookeeper ... 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,135] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,145] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,145] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,146] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,146] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,148] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,148] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,148] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,148] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,150] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,150] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,151] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,151] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,151] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,151] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,151] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,167] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,170] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,170] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,174] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,186] INFO (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,187] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,187] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,187] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,187] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,187] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,187] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 mariadb | # You may obtain a copy of the License at 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,187] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 mariadb | # 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,187] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 17:04:40 kafka | ===> User 17:04:40 policy-apex-pdp | Waiting for mariadb port 3306... 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,187] INFO (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 mariadb | # 17:04:40 policy-clamp-ac-k8s-ppnt | Waiting for kafka port 9092... 
17:04:40 policy-api | Waiting for mariadb port 3306... 17:04:40 policy-clamp-ac-sim-ppnt | Waiting for kafka port 9092... 17:04:40 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 17:04:40 policy-apex-pdp | mariadb (172.17.0.4:3306) open 17:04:40 policy-clamp-ac-pf-ppnt | Waiting for kafka port 9092... 17:04:40 policy-clamp-ac-http-ppnt | Waiting for kafka port 9092... 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 mariadb | # Unless required by applicable law or agreed to in writing, software 17:04:40 policy-db-migrator | Waiting for mariadb port 3306... 17:04:40 policy-clamp-ac-k8s-ppnt | kafka (172.17.0.5:9092) open 17:04:40 policy-api | Waiting for policy-db-migrator port 6824... 17:04:40 policy-clamp-ac-sim-ppnt | kafka (172.17.0.5:9092) open 17:04:40 policy-clamp-runtime-acm | Waiting for mariadb port 3306... 17:04:40 policy-pap | Waiting for mariadb port 3306... 17:04:40 kafka | ===> Configuring ... 17:04:40 policy-apex-pdp | Waiting for kafka port 9092... 17:04:40 policy-clamp-ac-pf-ppnt | kafka (172.17.0.5:9092) open 17:04:40 policy-clamp-ac-http-ppnt | kafka (172.17.0.5:9092) open 17:04:40 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:host.name=04f691a4a685 (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 17:04:40 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 17:04:40 policy-clamp-ac-k8s-ppnt | Policy clamp Kubernetes participant config file: /opt/app/policy/clamp/etc/KubernetesParticipantParameters.yaml 17:04:40 policy-api | mariadb (172.17.0.4:3306) open 17:04:40 policy-clamp-ac-sim-ppnt | Policy clamp Simulator participant config file: /opt/app/policy/clamp/etc/SimulatorParticipantParameters.yaml 17:04:40 policy-clamp-runtime-acm | mariadb (172.17.0.4:3306) open 17:04:40 policy-pap | mariadb (172.17.0.4:3306) open 17:04:40 kafka | Running in Zookeeper mode... 17:04:40 policy-apex-pdp | kafka (172.17.0.5:9092) open 17:04:40 policy-clamp-ac-pf-ppnt | Waiting for api port 6969... 17:04:40 policy-clamp-ac-http-ppnt | Policy clamp HTTP participant config file: /opt/app/policy/clamp/etc/HttpParticipantParameters.yaml 17:04:40 simulator | overriding logback.xml 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 17:04:40 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 17:04:40 policy-clamp-ac-k8s-ppnt | 17:04:40 policy-api | policy-db-migrator (172.17.0.6:6824) open 17:04:40 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 17:04:40 policy-clamp-runtime-acm | Waiting for kafka port 9092... 17:04:40 policy-pap | Waiting for kafka port 9092... 17:04:40 kafka | ===> Running preflight checks ... 17:04:40 kafka | ===> Check if /var/lib/kafka/data is writable ... 
17:04:40 policy-clamp-ac-pf-ppnt | api (172.17.0.10:6969) open 17:04:40 policy-clamp-ac-http-ppnt | 17:04:40 simulator | 2024-02-16 17:02:26,964 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 mariadb | # See the License for the specific language governing permissions and 17:04:40 policy-clamp-ac-k8s-ppnt | . ____ _ __ _ _ 17:04:40 policy-api | 17:04:40 policy-clamp-ac-sim-ppnt | 17:04:40 policy-clamp-ac-sim-ppnt | . ____ _ __ _ _ 17:04:40 policy-pap | kafka (172.17.0.5:9092) open 17:04:40 kafka | ===> Check if Zookeeper is healthy ... 17:04:40 policy-apex-pdp | Waiting for pap port 6969... 17:04:40 policy-clamp-ac-pf-ppnt | Policy clamp policy participant config file: /opt/app/policy/clamp/etc/PolicyParticipantParameters.yaml 17:04:40 policy-clamp-ac-http-ppnt | . ____ _ __ _ _ 17:04:40 simulator | 2024-02-16 17:02:27,029 INFO org.onap.policy.models.simulators starting 17:04:40 mariadb | # limitations under the License. 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-exten
sion-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/
metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | . ____ _ __ _ _ 17:04:40 policy-clamp-runtime-acm | kafka (172.17.0.5:9092) open 17:04:40 policy-clamp-ac-sim-ppnt | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 17:04:40 policy-pap | Waiting for api port 6969... 17:04:40 kafka | SLF4J: Class path contains multiple SLF4J bindings. 17:04:40 policy-apex-pdp | pap (172.17.0.11:6969) open 17:04:40 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 17:04:40 policy-clamp-ac-pf-ppnt | 17:04:40 policy-clamp-ac-http-ppnt | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 17:04:40 simulator | 2024-02-16 17:02:27,030 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 17:04:40 mariadb | 17:04:40 policy-clamp-ac-k8s-ppnt | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 17:04:40 policy-clamp-runtime-acm | Waiting for policy-clamp-ac-http-ppnt port 6969... 
17:04:40 policy-clamp-ac-sim-ppnt | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 17:04:40 policy-pap | api (172.17.0.10:6969) open 17:04:40 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.025+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.189+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-clamp-ac-pf-ppnt | . ____ _ __ _ _ 17:04:40 policy-clamp-ac-http-ppnt | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 17:04:40 simulator | 2024-02-16 17:02:27,384 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 17:04:40 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:04:40 policy-clamp-ac-k8s-ppnt | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 17:04:40 policy-clamp-runtime-acm | policy-clamp-ac-http-ppnt (172.17.0.7:6969) open 17:04:40 policy-clamp-ac-sim-ppnt | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 17:04:40 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 17:04:40 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 17:04:40 policy-apex-pdp | allow.auto.create.topics = true 17:04:40 policy-apex-pdp | auto.commit.interval.ms = 5000 17:04:40 policy-clamp-ac-pf-ppnt | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 17:04:40 policy-clamp-ac-http-ppnt | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 17:04:40 simulator | 2024-02-16 17:02:27,385 INFO org.onap.policy.models.simulators starting A&AI simulator 17:04:40 mariadb | do 17:04:40 policy-clamp-ac-k8s-ppnt | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 17:04:40 policy-clamp-runtime-acm | Waiting for policy-clamp-ac-k8s-ppnt port 6969... 17:04:40 policy-clamp-ac-sim-ppnt | ' |____| .__|_| |_|_| |_\__, | / / / / 17:04:40 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 17:04:40 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
17:04:40 policy-apex-pdp | auto.include.jmx.reporter = true 17:04:40 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 17:04:40 policy-clamp-ac-pf-ppnt | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 17:04:40 policy-clamp-ac-http-ppnt | ' |____| .__|_| |_|_| |_\__, | / / / / 17:04:40 simulator | 2024-02-16 17:02:27,569 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:04:40 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 17:04:40 policy-clamp-ac-k8s-ppnt | ' |____| .__|_| |_|_| |_\__, | / / / / 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,188] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 17:04:40 policy-clamp-runtime-acm | policy-clamp-ac-k8s-ppnt (172.17.0.8:6969) open 17:04:40 policy-clamp-ac-sim-ppnt | =========|_|==============|___/=/_/_/_/ 17:04:40 policy-pap | 17:04:40 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 17:04:40 policy-apex-pdp | auto.offset.reset = latest 17:04:40 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 17:04:40 policy-clamp-ac-pf-ppnt | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 17:04:40 policy-clamp-ac-http-ppnt | =========|_|==============|___/=/_/_/_/ 17:04:40 simulator | 2024-02-16 17:02:27,580 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:40 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 17:04:40 policy-clamp-ac-k8s-ppnt | =========|_|==============|___/=/_/_/_/ 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | =========|_|==============|___/=/_/_/_/ 17:04:40 policy-clamp-runtime-acm | Waiting for policy-clamp-ac-pf-ppnt port 6969... 17:04:40 policy-clamp-ac-sim-ppnt | :: Spring Boot :: (v3.1.8) 17:04:40 policy-pap | . 
____ _ __ _ _ 17:04:40 kafka | [2024-02-16 17:02:29,660] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-apex-pdp | bootstrap.servers = [kafka:9092] 17:04:40 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 17:04:40 policy-clamp-ac-pf-ppnt | ' |____| .__|_| |_|_| |_\__, | / / / / 17:04:40 policy-clamp-ac-http-ppnt | :: Spring Boot :: (v3.1.8) 17:04:40 simulator | 2024-02-16 17:02:27,583 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:40 mariadb | done 17:04:40 policy-clamp-ac-k8s-ppnt | :: Spring Boot :: (v3.1.8) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | :: Spring Boot :: (v3.1.8) 17:04:40 policy-clamp-runtime-acm | policy-clamp-ac-pf-ppnt (172.17.0.12:6969) open 17:04:40 policy-clamp-ac-sim-ppnt | 17:04:40 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 17:04:40 kafka | [2024-02-16 17:02:29,661] INFO Client environment:host.name=d35b1e98ac95 (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-apex-pdp | check.crcs = true 17:04:40 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 17:04:40 policy-clamp-ac-pf-ppnt | =========|_|==============|___/=/_/_/_/ 17:04:40 policy-clamp-ac-http-ppnt | 17:04:40 simulator | 2024-02-16 17:02:27,588 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 17:04:40 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:04:40 policy-clamp-ac-k8s-ppnt | 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | 17:04:40 policy-clamp-runtime-acm | Waiting for apex-pdp port 6969... 
17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:35.925+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 17:04:40 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 17:04:40 kafka | [2024-02-16 17:02:29,661] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 17:04:40 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 17:04:40 policy-clamp-ac-pf-ppnt | :: Spring Boot :: (v3.1.8) 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:37.103+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 17:04:40 simulator | 2024-02-16 17:02:27,650 INFO Session workerName=node0 17:04:40 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:02:37.525+00:00|INFO|Application|main] Starting Application using Java 17.0.10 with PID 14 (/app/app.jar started by policy in /opt/app/policy/clamp/bin) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:48.695+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 24 (/app/api.jar started by policy in /opt/app/policy/api/bin) 17:04:40 policy-clamp-runtime-acm | apex-pdp (172.17.0.13:6969) open 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:36.139+00:00|INFO|Application|main] Starting Application using Java 17.0.10 with PID 11 (/app/app.jar started by policy in /opt/app/policy/clamp/bin) 17:04:40 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 17:04:40 kafka | [2024-02-16 17:02:29,661] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-apex-pdp | client.id = consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-1 17:04:40 policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 
17:04:40 policy-clamp-ac-pf-ppnt | 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:37.321+00:00|INFO|Application|main] Starting Application using Java 17.0.10 with PID 15 (/app/app.jar started by policy in /opt/app/policy/clamp/bin) 17:04:40 simulator | 2024-02-16 17:02:28,203 INFO Using GSON for REST calls 17:04:40 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:02:37.542+00:00|INFO|Application|main] No active profile set, falling back to 1 default profile: "default" 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:48.715+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 17:04:40 policy-clamp-runtime-acm | Policy clamp runtime acm config file: /opt/app/policy/clamp/etc/AcRuntimeParameters.yaml 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:36.140+00:00|INFO|Application|main] No active profile set, falling back to 1 default profile: "default" 17:04:40 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 17:04:40 kafka | [2024-02-16 17:02:29,661] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-apex-pdp | client.rack = 17:04:40 policy-db-migrator | 321 blocks 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:10.055+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:37.326+00:00|INFO|Application|main] No active profile set, falling back to 1 default profile: "default" 17:04:40 simulator | 2024-02-16 17:02:28,281 INFO Started o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE} 17:04:40 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:02:53.408+00:00|INFO|network|main] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:53.909+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 
17:04:40 policy-clamp-runtime-acm | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:41.282+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 17:04:40 policy-pap | =========|_|==============|___/=/_/_/_/ 17:04:40 policy-apex-pdp | connections.max.idle.ms = 540000 17:04:40 kafka | [2024-02-16 17:02:29,661] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams
-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/
kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-db-migrator | Preparing upgrade release version: 0800 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:10.130+00:00|INFO|PolicyParticipantApplication|main] Starting PolicyParticipantApplication using Java 17.0.10 with PID 43 (/app/app.jar started by policy in /opt/app/policy/clamp/bin) 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:43.801+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 17:04:40 simulator | 2024-02-16 17:02:28,288 INFO Started A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 17:04:40 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 17:04:40 policy-clamp-ac-k8s-ppnt | {"participantSupportedElementType":[{"id":"c5bce600-be93-48d6-9321-677a64168aee","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"b1c974b7-fe9f-4dcf-8062-bfa33efdbeb7","timestamp":"2024-02-16T17:02:53.366640521Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 zookeeper_1 | [2024-02-16 
17:02:27,189] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:54.094+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 169 ms. Found 6 JPA repository interfaces. 17:04:40 policy-clamp-runtime-acm | . ____ _ __ _ _ 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:41.294+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 17:04:40 policy-apex-pdp | default.api.timeout.ms = 60000 17:04:40 policy-apex-pdp | enable.auto.commit = true 17:04:40 kafka | [2024-02-16 17:02:29,661] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-db-migrator | Preparing upgrade release version: 0900 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:10.132+00:00|INFO|PolicyParticipantApplication|main] No active profile set, falling back to 1 default profile: "default" 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:43.822+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 17:04:40 simulator | 2024-02-16 17:02:28,294 INFO Started Server@2a2c13a8{STARTING}[11.0.20,sto=0] @1992ms 17:04:40 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:02:54.257+00:00|INFO|Application|main] Started Application in 18.143 seconds (process running for 19.797) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:54.792+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 17:04:40 policy-clamp-runtime-acm | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:41.296+00:00|INFO|StandardService|main] Starting service [Tomcat] 17:04:40 policy-apex-pdp | exclude.internal.topics = true 17:04:40 policy-apex-pdp | fetch.max.bytes = 52428800 17:04:40 kafka | [2024-02-16 17:02:29,661] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-db-migrator | Preparing upgrade release version: 1000 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:13.801+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:43.829+00:00|INFO|StandardService|main] Starting service [Tomcat] 17:04:40 simulator | 2024-02-16 17:02:28,294 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4289 ms. 17:04:40 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:19.195+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:54.792+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 17:04:40 policy-clamp-runtime-acm | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:41.296+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 17:04:40 policy-apex-pdp | fetch.max.wait.ms = 500 17:04:40 policy-apex-pdp | fetch.min.bytes = 1 17:04:40 kafka | [2024-02-16 17:02:29,662] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-db-migrator | Preparing upgrade release version: 1100 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:13.810+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:43.830+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 17:04:40 simulator | 2024-02-16 17:02:28,299 INFO org.onap.policy.models.simulators starting SDNC simulator 17:04:40 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 17:04:40 policy-clamp-ac-k8s-ppnt | {"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"b87f9a17-6d57-4360-84d0-97780fa59145","timestamp":"2024-02-16T17:03:18.614708531Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:55.514+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 17:04:40 policy-clamp-runtime-acm | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 17:04:40 policy-clamp-runtime-acm | ' |____| .__|_| |_|_| |_\__, | / / / / 17:04:40 policy-apex-pdp | group.id = 02b3ddfc-6c0d-4750-8519-6e56d3cb3479 17:04:40 policy-apex-pdp | group.instance.id = null 17:04:40 policy-db-migrator | Preparing upgrade release version: 1200 17:04:40 kafka | [2024-02-16 17:02:29,662] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:44.017+00:00|INFO|[/onap/policy/clamp/acm/httpparticipant]|main] Initializing Spring embedded WebApplicationContext 17:04:40 simulator | 2024-02-16 17:02:28,301 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:04:40 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.204+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] 
[IN|KAFKA|policy-acruntime-participant] 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:55.528+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 17:04:40 policy-apex-pdp | heartbeat.interval.ms = 3000 17:04:40 policy-apex-pdp | interceptor.classes = [] 17:04:40 policy-apex-pdp | internal.leave.group.on.close = true 17:04:40 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:13.812+00:00|INFO|StandardService|main] Starting service [Tomcat] 17:04:40 policy-db-migrator | Preparing upgrade release version: 1300 17:04:40 kafka | [2024-02-16 17:02:29,662] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:44.018+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 6514 ms 17:04:40 simulator | 2024-02-16 17:02:28,301 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:40 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:04:40 policy-clamp-ac-k8s-ppnt | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"751ed6c0-ea59-4009-9448-d8d54dacf1ac","timestamp":"2024-02-16T17:03:47.116528928Z"} 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:55.530+00:00|INFO|StandardService|main] Starting service [Tomcat] 17:04:40 policy-apex-pdp | isolation.level = read_uncommitted 17:04:40 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-apex-pdp | max.partition.fetch.bytes = 1048576 17:04:40 policy-apex-pdp | max.poll.interval.ms = 300000 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:13.813+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 17:04:40 policy-db-migrator | Done 17:04:40 kafka | [2024-02-16 17:02:29,662] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:47.611+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 simulator | 2024-02-16 17:02:28,303 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, 
jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:40 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.252+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:55.530+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 17:04:40 policy-apex-pdp | max.poll.records = 500 17:04:40 policy-apex-pdp | metadata.max.age.ms = 300000 17:04:40 policy-apex-pdp | metric.reporters = [] 17:04:40 policy-apex-pdp | metrics.num.samples = 2 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:13.974+00:00|INFO|[/onap/policy/clamp/acm/policyparticipant]|main] Initializing Spring embedded WebApplicationContext 17:04:40 policy-db-migrator | name version 17:04:40 kafka | [2024-02-16 17:02:29,662] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | allow.auto.create.topics = true 17:04:40 simulator | 2024-02-16 17:02:28,304 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 17:04:40 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:04:40 policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"c5bce600-be93-48d6-9321-677a64168aee","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"757abd77-5d53-4c4b-b040-1abc1768cd48","timestamp":"2024-02-16T17:03:47.217554434Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,189] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:55.645+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 17:04:40 policy-apex-pdp | metrics.recording.level = INFO 17:04:40 policy-apex-pdp | metrics.sample.window.ms = 30000 17:04:40 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-apex-pdp | receive.buffer.bytes = 65536 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:13.975+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3748 ms 17:04:40 policy-db-migrator | policyadmin 0 17:04:40 kafka | [2024-02-16 17:02:29,662] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | auto.commit.interval.ms = 5000 17:04:40 simulator | 2024-02-16 17:02:28,317 INFO Session workerName=node0 17:04:40 mariadb | + for db in migration pooling policyadmin 
operationshistory clampacm policyclamp 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.262+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,190] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 17:04:40 policy-api | [2024-02-16T17:02:55.646+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 6532 ms 17:04:40 policy-apex-pdp | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:41.554+00:00|INFO|[/onap/policy/clamp/acm/simparticipant]|main] Initializing Spring embedded WebApplicationContext 17:04:40 policy-pap | :: Spring Boot :: (v3.1.8) 17:04:40 policy-pap | 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:15.836+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 17:04:40 kafka | [2024-02-16 17:02:29,662] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | auto.include.jmx.reporter = true 17:04:40 simulator | 2024-02-16 17:02:28,373 INFO Using GSON for REST calls 17:04:40 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 17:04:40 policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"939773c5-cc6c-46b4-b31a-65f7c2af01e5","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"b2d45c0a-d1ba-4949-b418-95d49ee361f7","timestamp":"2024-02-16T17:03:47.198657381Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,191] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api | [2024-02-16T17:02:56.116+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 17:04:40 policy-apex-pdp | reconnect.backoff.ms = 50 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:41.554+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 5262 ms 17:04:40 policy-pap | [2024-02-16T17:03:14.163+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 49 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 17:04:40 policy-pap | [2024-02-16T17:03:14.166+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 17:04:40 policy-clamp-ac-pf-ppnt | allow.auto.create.topics = true 17:04:40 policy-db-migrator | upgrade: 0 -> 1300 17:04:40 kafka | [2024-02-16 17:02:29,662] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | auto.offset.reset = latest 17:04:40 simulator | 2024-02-16 17:02:28,383 INFO Started o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE} 17:04:40 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.262+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,191] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-api 
| [2024-02-16T17:02:56.230+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 17:04:40 policy-apex-pdp | request.timeout.ms = 30000 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:44.197+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-pap | [2024-02-16T17:03:16.697+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 17:04:40 policy-pap | [2024-02-16T17:03:16.837+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 126 ms. Found 7 JPA repository interfaces. 17:04:40 policy-clamp-ac-pf-ppnt | auto.commit.interval.ms = 5000 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:29,662] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | bootstrap.servers = [kafka:9092] 17:04:40 simulator | 2024-02-16 17:02:28,386 INFO Started SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 17:04:40 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:04:40 policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"29e7ea94-9dca-43f0-a2b0-3f661708aa9f","timestamp":"2024-02-16T17:03:47.211534907Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,192] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 17:04:40 policy-api | [2024-02-16T17:02:56.234+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 17:04:40 policy-apex-pdp | retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | allow.auto.create.topics = true 17:04:40 policy-pap | [2024-02-16T17:03:17.496+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 17:04:40 policy-pap | [2024-02-16T17:03:17.496+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 17:04:40 policy-clamp-ac-pf-ppnt | auto.include.jmx.reporter = true 17:04:40 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 17:04:40 kafka | [2024-02-16 17:02:29,662] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | check.crcs = true 17:04:40 simulator | 2024-02-16 17:02:28,387 INFO Started Server@62452cc9{STARTING}[11.0.20,sto=0] @2085ms 17:04:40 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.267+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,192] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 17:04:40 policy-api | [2024-02-16T17:02:56.282+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 17:04:40 policy-apex-pdp | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-ac-sim-ppnt | auto.commit.interval.ms = 5000 17:04:40 policy-pap | [2024-02-16T17:03:18.557+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 17:04:40 policy-pap | [2024-02-16T17:03:18.570+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 17:04:40 policy-clamp-ac-pf-ppnt | auto.offset.reset = latest 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:29,665] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | client.dns.lookup = use_all_dns_ips 17:04:40 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:04:40 simulator | 2024-02-16 17:02:28,387 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4915 ms. 
17:04:40 policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"c5bce600-be93-48d6-9321-677a64168aee","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"757abd77-5d53-4c4b-b040-1abc1768cd48","timestamp":"2024-02-16T17:03:47.217554434Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,193] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:04:40 policy-api | [2024-02-16T17:02:56.665+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 17:04:40 policy-apex-pdp | sasl.jaas.config = null 17:04:40 policy-clamp-ac-sim-ppnt | auto.include.jmx.reporter = true 17:04:40 policy-pap | [2024-02-16T17:03:18.573+00:00|INFO|StandardService|main] Starting service [Tomcat] 17:04:40 policy-pap | [2024-02-16T17:03:18.574+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 17:04:40 policy-clamp-ac-pf-ppnt | bootstrap.servers = [kafka:9092] 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 17:04:40 kafka | [2024-02-16 17:02:29,669] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 17:04:40 policy-clamp-ac-http-ppnt | client.id = consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-1 17:04:40 mariadb | 17:04:40 simulator | 2024-02-16 17:02:28,388 INFO org.onap.policy.models.simulators starting SO simulator 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.267+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,193] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:04:40 policy-api | [2024-02-16T17:02:56.691+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
17:04:40 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-ac-sim-ppnt | auto.offset.reset = latest 17:04:40 policy-pap | [2024-02-16T17:03:18.730+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 17:04:40 policy-pap | [2024-02-16T17:03:18.730+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 4404 ms 17:04:40 policy-clamp-ac-pf-ppnt | check.crcs = true 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:29,674] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 17:04:40 policy-clamp-ac-http-ppnt | client.rack = 17:04:40 simulator | 2024-02-16 17:02:28,391 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:04:40 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 17:04:40 policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"96f16c87-93d1-40e8-89e7-ca9ee0be53f1","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"90887c49-7ec2-421b-9586-f22755afb378","timestamp":"2024-02-16T17:03:47.203750282Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,193] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:04:40 policy-api | [2024-02-16T17:02:56.817+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7fd26ad8 17:04:40 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-ac-sim-ppnt | bootstrap.servers = [kafka:9092] 17:04:40 policy-pap | [2024-02-16T17:03:19.281+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 17:04:40 policy-pap | [2024-02-16T17:03:19.377+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 17:04:40 policy-clamp-ac-pf-ppnt | client.dns.lookup = use_all_dns_ips 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:29,682] INFO zookeeper.request.timeout value is 0. 
feature enabled=false (org.apache.zookeeper.ClientCnxn) 17:04:40 policy-clamp-ac-http-ppnt | connections.max.idle.ms = 540000 17:04:40 simulator | 2024-02-16 17:02:28,392 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:40 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.743+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,193] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:04:40 policy-api | [2024-02-16T17:02:56.820+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 17:04:40 policy-apex-pdp | sasl.kerberos.service.name = null 17:04:40 policy-clamp-ac-sim-ppnt | check.crcs = true 17:04:40 policy-pap | [2024-02-16T17:03:19.381+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 17:04:40 policy-clamp-runtime-acm | =========|_|==============|___/=/_/_/_/ 17:04:40 policy-clamp-ac-pf-ppnt | client.id = consumer-97317da4-3ba6-4109-8e73-20dc2312d257-1 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:29,697] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) 17:04:40 policy-clamp-ac-http-ppnt | default.api.timeout.ms = 60000 17:04:40 simulator | 2024-02-16 17:02:28,392 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:40 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,193] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:04:40 policy-clamp-ac-k8s-ppnt | {"participantDefinitionUpdates":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"Ericsson","startPhase":0},"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the operational policy for Performance Management Subscription Handling"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Starter"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element 
Bridge"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Sink"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Starter microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Bridge microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Sink microservice"},"outProperties":{}}]}],"messageType":"PARTICIPANT_PRIME","messageId":"fc518aed-3741-43ec-b597-0cd9ccf000cb","timestamp":"2024-02-16T17:03:47.714617694Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-api | [2024-02-16T17:02:56.860+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 17:04:40 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-clamp-ac-sim-ppnt | client.dns.lookup = use_all_dns_ips 17:04:40 policy-pap | [2024-02-16T17:03:19.435+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 17:04:40 policy-pap | [2024-02-16T17:03:19.821+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 17:04:40 policy-clamp-ac-pf-ppnt | client.rack = 17:04:40 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 17:04:40 kafka | [2024-02-16 17:02:29,698] INFO SASL config status: Will not attempt to authenticate using SASL 
(unknown error) (org.apache.zookeeper.ClientCnxn) 17:04:40 policy-clamp-ac-http-ppnt | enable.auto.commit = true 17:04:40 simulator | 2024-02-16 17:02:28,393 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 17:04:40 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,193] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.753+00:00|INFO|network|pool-4-thread-1] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-api | [2024-02-16T17:02:56.862+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 17:04:40 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-clamp-ac-sim-ppnt | client.id = consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-1 17:04:40 policy-pap | [2024-02-16T17:03:19.844+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 17:04:40 policy-pap | [2024-02-16T17:03:19.955+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@37e0292a 17:04:40 policy-clamp-ac-pf-ppnt | connections.max.idle.ms = 540000 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:29,707] INFO Socket connection established, initiating session, client: /172.17.0.5:56962, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 17:04:40 policy-clamp-ac-http-ppnt | exclude.internal.topics = true 17:04:40 simulator | 2024-02-16 17:02:28,395 INFO Session workerName=node0 17:04:40 mariadb | 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,196] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-clamp-ac-k8s-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 policy-api | [2024-02-16T17:02:59.171+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 17:04:40 policy-apex-pdp | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-ac-sim-ppnt | client.rack = 17:04:40 policy-pap | [2024-02-16T17:03:19.957+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
17:04:40 policy-pap | [2024-02-16T17:03:19.990+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 17:04:40 policy-clamp-ac-pf-ppnt | default.api.timeout.ms = 60000 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 17:04:40 kafka | [2024-02-16 17:02:29,753] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000554810000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 17:04:40 policy-clamp-ac-http-ppnt | fetch.max.bytes = 52428800 17:04:40 simulator | 2024-02-16 17:02:28,450 INFO Using GSON for REST calls 17:04:40 mariadb | 2024-02-16 17:02:33+00:00 [Note] [Entrypoint]: Stopping temporary server 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,196] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.768+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-api | [2024-02-16T17:02:59.174+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 17:04:40 policy-apex-pdp | sasl.login.class = null 17:04:40 policy-clamp-ac-sim-ppnt | connections.max.idle.ms = 540000 17:04:40 policy-pap | [2024-02-16T17:03:19.991+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 17:04:40 policy-pap | [2024-02-16T17:03:22.082+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 17:04:40 policy-clamp-ac-pf-ppnt | enable.auto.commit = true 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:29,885] INFO Session: 0x100000554810000 closed (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-clamp-ac-http-ppnt | fetch.max.wait.ms = 500 17:04:40 simulator | 2024-02-16 17:02:28,463 INFO Started o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE} 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,196] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 17:04:40 policy-clamp-ac-k8s-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 policy-api | [2024-02-16T17:03:00.964+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 17:04:40 policy-apex-pdp | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-ac-sim-ppnt | default.api.timeout.ms = 60000 17:04:40 policy-pap | 
[2024-02-16T17:03:22.087+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 17:04:40 policy-pap | [2024-02-16T17:03:22.685+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 17:04:40 policy-clamp-ac-pf-ppnt | exclude.internal.topics = true 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:29,885] INFO EventThread shut down for session: 0x100000554810000 (org.apache.zookeeper.ClientCnxn) 17:04:40 simulator | 2024-02-16 17:02:28,465 INFO Started SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: FTS optimize thread exiting. 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,196] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.836+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-api | [2024-02-16T17:03:04.304+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 17:04:40 policy-apex-pdp | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-ac-sim-ppnt | enable.auto.commit = true 17:04:40 policy-pap | [2024-02-16T17:03:23.139+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 17:04:40 policy-clamp-runtime-acm | :: Spring Boot :: (v3.1.8) 17:04:40 policy-clamp-ac-pf-ppnt | fetch.max.bytes = 52428800 17:04:40 policy-db-migrator | 17:04:40 kafka | Using log4j config /etc/kafka/log4j.properties 17:04:40 kafka | ===> Launching ... 17:04:40 simulator | 2024-02-16 17:02:28,465 INFO Started Server@488eb7f2{STARTING}[11.0.20,sto=0] @2163ms 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: Starting shutdown... 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,197] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 policy-clamp-ac-k8s-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-api | [2024-02-16T17:03:06.644+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 17:04:40 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-ac-sim-ppnt | exclude.internal.topics = true 17:04:40 policy-clamp-runtime-acm | 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:30.255+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 17:04:40 policy-clamp-ac-pf-ppnt | fetch.max.wait.ms = 500 17:04:40 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 17:04:40 policy-clamp-ac-http-ppnt | fetch.min.bytes = 1 17:04:40 kafka | ===> Launching kafka ... 17:04:40 simulator | 2024-02-16 17:02:28,465 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4927 ms. 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,221] INFO Logging initialized @653ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:47.846+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-api | [2024-02-16T17:03:06.952+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@4295b0b8, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@ab8b1ef, org.springframework.security.web.context.SecurityContextHolderFilter@6aca85da, org.springframework.security.web.header.HeaderWriterFilter@5ca4763f, org.springframework.security.web.authentication.logout.LogoutFilter@2f29400e, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@53d257e7, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1d123972, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@7b3d759f, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3413effc, org.springframework.security.web.access.ExceptionTranslationFilter@5a487b86, org.springframework.security.web.access.intercept.AuthorizationFilter@3341ba8e] 17:04:40 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-ac-sim-ppnt | fetch.max.bytes = 52428800 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:30.306+00:00|INFO|Application|main] Starting Application using Java 17.0.10 with PID 66 (/app/app.jar started by policy in /opt/app/policy/clamp/bin) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:30.308+00:00|INFO|Application|main] No active profile set, falling back to 1 default profile: "default" 17:04:40 policy-clamp-ac-pf-ppnt | fetch.min.bytes = 1 17:04:40 policy-db-migrator | -------------- 17:04:40 
policy-clamp-ac-http-ppnt | group.id = 32e809a3-a7c0-4e13-b7a3-aa811059e0bc 17:04:40 kafka | [2024-02-16 17:02:30,604] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 17:04:40 simulator | 2024-02-16 17:02:28,465 INFO org.onap.policy.models.simulators starting VFC simulator 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: Buffer pool(s) dump completed at 240216 17:02:33 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,379] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 17:04:40 policy-clamp-ac-k8s-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 policy-api | [2024-02-16T17:03:08.096+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 17:04:40 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-ac-sim-ppnt | fetch.max.wait.ms = 500 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:31.743+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:31.984+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 230 ms. Found 5 JPA repository interfaces. 17:04:40 policy-clamp-ac-pf-ppnt | group.id = 97317da4-3ba6-4109-8e73-20dc2312d257 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 17:04:40 policy-clamp-ac-http-ppnt | group.instance.id = null 17:04:40 kafka | [2024-02-16 17:02:30,970] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 17:04:40 simulator | 2024-02-16 17:02:28,468 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,379] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:53.715+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-api | [2024-02-16T17:03:08.202+00:00|INFO|Http11NioProtocol|main] Starting 
ProtocolHandler ["http-nio-6969"] 17:04:40 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-ac-sim-ppnt | fetch.min.bytes = 1 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:33.464+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.acm.runtime.supervision.SupervisionAspect 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:34.123+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 17:04:40 policy-clamp-ac-pf-ppnt | group.instance.id = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | heartbeat.interval.ms = 3000 17:04:40 kafka | [2024-02-16 17:02:31,051] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 17:04:40 simulator | 2024-02-16 17:02:28,468 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: Shutdown completed; log sequence number 330921; transaction id 298 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,399] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 17:04:40 policy-api | [2024-02-16T17:03:08.226+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 17:04:40 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | group.id = 6a2107c9-1f65-47c8-af5c-8c5cc7111397 17:04:40 policy-clamp-ac-k8s-ppnt | {"participantUpdatesList":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","acElementList":[{"id":"709c62b3-8918-41b9-a747-d21eb79c6c20","definition":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"orderedState":"DEPLOY","toscaServiceTemplateFragment":{"data_types":{"onap.datatypes.ToscaConceptIdentifier":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","required":true},"version":{"name":"version","type":"string","type_version":"0.0.0","required":true}},"name":"onap.datatypes.ToscaConceptIdentifier","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EngineService":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the engine name","default":"ApexEngineService","required":false},"version":{"name":"version","type":"string","type_version":"0.0.0","description":"Specifies the engine version in double dotted 
format","default":"1.0.0","required":false},"id":{"name":"id","type":"integer","type_version":"0.0.0","description":"Specifies the engine id","required":true},"instance_count":{"name":"instance_count","type":"integer","type_version":"0.0.0","description":"Specifies the number of engine threads that should be run","required":true},"deployment_port":{"name":"deployment_port","type":"integer","type_version":"0.0.0","description":"Specifies the port to connect to for engine administration","default":1.0,"required":false},"policy_model_file_name":{"name":"policy_model_file_name","type":"string","type_version":"0.0.0","description":"The name of the file from which to read the APEX policy model","required":false},"policy_type_impl":{"name":"policy_type_impl","type":"string","type_version":"0.0.0","description":"The policy type implementation from which to read the APEX policy model","required":false},"periodic_event_period":{"name":"periodic_event_period","type":"string","type_version":"0.0.0","description":"The time interval in milliseconds for the periodic scanning event, 0 means don't scan","required":false},"engine":{"name":"engine","type":"onap.datatypes.native.apex.engineservice.Engine","type_version":"0.0.0","description":"The parameters for all engines in the APEX engine service","required":true}},"name":"onap.datatypes.native.apex.EngineService","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventHandler":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the event handler name, if not specified this is set to the key name","required":false},"carrier_technology":{"name":"carrier_technology","type":"onap.datatypes.native.apex.CarrierTechnology","type_version":"0.0.0","description":"Specifies the carrier technology of the event handler (such as REST/Web Socket/Kafka)","required":true},"event_protocol":{"name":"event_protocol","type":"onap.datatypes.native.apex.EventProtocol","type_version":"0.0.0","description":"Specifies the event protocol of events for the event handler (such as Yaml/JSON/XML/POJO)","required":true},"event_name":{"name":"event_name","type":"string","type_version":"0.0.0","description":"Specifies the event name for events on this event handler, if not specified, the event name is read from or written to the event being received or sent","required":false},"event_name_filter":{"name":"event_name_filter","type":"string","type_version":"0.0.0","description":"Specifies a filter as a regular expression, events that do not match the filter are dropped, the default is to let all events through","required":false},"synchronous_mode":{"name":"synchronous_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is syncronous (receive event and send response)","default":false,"required":false},"synchronous_peer":{"name":"synchronous_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in synchronous mode, this parameter is mandatory if the event handler is in synchronous mode","required":false},"synchronous_timeout":{"name":"synchronous_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for responses to be issued by APEX torequests, this parameter is mandatory if the event handler is in synchronous 
mode","required":false},"requestor_mode":{"name":"requestor_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is in requestor mode (send event and wait for response mode)","default":false,"required":false},"requestor_peer":{"name":"requestor_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in requestor mode, this parameter is mandatory if the event handler is in requestor mode","required":false},"requestor_timeout":{"name":"requestor_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for wait for responses to requests, this parameter is mandatory if the event handler is in requestor mode","required":false}},"name":"onap.datatypes.native.apex.EventHandler","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.CarrierTechnology":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the carrier technology (such as REST, Kafka, WebSocket)","required":true},"plugin_parameter_class_name":{"name":"plugin_parameter_class_name","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of event input or output for this carrier technology, defaults to the supplied input or output class","required":false}},"name":"onap.datatypes.native.apex.CarrierTechnology","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventProtocol":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the event protocol (such as Yaml, JSON, XML, or POJO)","required":true},"event_protocol_plugin_class":{"name":"event_protocol_plugin_class","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of the event protocol for this carrier technology, defaults to the supplied event protocol class","required":false}},"name":"onap.datatypes.native.apex.EventProtocol","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Environment":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the environment variable","required":true},"value":{"name":"value","type":"string","type_version":"0.0.0","description":"The value of the environment variable","required":true}},"name":"onap.datatypes.native.apex.Environment","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.Engine":{"properties":{"context":{"name":"context","type":"onap.datatypes.native.apex.engineservice.engine.Context","type_version":"0.0.0","description":"The properties for handling context in APEX engines, defaults to using Java maps for context","required":false},"executors":{"name":"executors","type":"map","type_version":"0.0.0","description":"The plugins for policy executors used in engines such as javascript, MVEL, Jython","required":true,"entry_schema":{"type":"string","type_version":"0.0.0","description":"The plugin class path for this policy 
executor"}}},"name":"onap.datatypes.native.apex.engineservice.Engine","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.engine.Context":{"properties":{"distributor":{"name":"distributor","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for distributing context between APEX PDPs at runtime","required":false},"schemas":{"name":"schemas","type":"map","type_version":"0.0.0","description":"The plugins for context schemas available in APEX PDPs such as Java and Avro","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0"}},"locking":{"name":"locking","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for locking context in and between APEX PDPs at runtime","required":false},"persistence":{"name":"persistence","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for persisting context for APEX PDPs at runtime","required":false}},"name":"onap.datatypes.native.apex.engineservice.engine.Context","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Plugin":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the executor such as Javascript, Jython or MVEL","required":true},"plugin_class_name":{"name":"plugin_class_name","type":"string","type_version":"0.0.0","description":"The class path of the plugin class for this executor","required":false}},"name":"onap.datatypes.native.apex.Plugin","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest":{"properties":{"restRequestId":{"name":"restRequestId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a REST request to be sent to a REST endpoint","required":true},"httpMethod":{"name":"httpMethod","type":"string","type_version":"0.0.0","description":"The REST method to use","required":true,"constraints":[{"valid_values":["POST","PUT","GET","DELETE"]}]},"path":{"name":"path","type":"string","type_version":"0.0.0","description":"The path of the REST request relative to the base URL","required":true},"body":{"name":"body","type":"string","type_version":"0.0.0","description":"The body of the REST request for PUT and POST requests","required":false},"expectedResponse":{"name":"expectedResponse","type":"integer","type_version":"0.0.0","description":"THe expected HTTP status code for the REST request","required":true,"constraints":[]}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity":{"properties":{"configurationEntityId":{"name":"configurationEntityId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a Configuration Entity to be handled by the HTTP Automation Composition Element","required":true},"restSequence":{"name":"restSequence","type":"list","type_version":"0.0.0","description":"A sequence of REST commands to send to the REST 
endpoint","required":false,"entry_schema":{"type":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","type_version":"1.0.0"}}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}}},"policy_types":{"onap.policies.Native":{"name":"onap.policies.Native","version":"1.0.0","derived_from":"tosca.policies.Root","metadata":{},"description":"a base policy type for all native PDP policies"},"onap.policies.native.Apex":{"properties":{"engine_service":{"name":"engine_service","type":"onap.datatypes.native.apex.EngineService","type_version":"0.0.0","description":"APEX Engine Service Parameters","required":false},"inputs":{"name":"inputs","type":"map","type_version":"0.0.0","description":"Inputs for handling events coming into the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"outputs":{"name":"outputs","type":"map","type_version":"0.0.0","description":"Outputs for handling events going out of the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"environment":{"name":"environment","type":"list","type_version":"0.0.0","description":"Envioronmental parameters for the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Environment","type_version":"0.0.0"}}},"name":"onap.policies.native.Apex","version":"1.0.0","derived_from":"onap.policies.Native","metadata":{},"description":"a policy type for native apex policies"}},"topology_template":{"policies":[{"onap.policies.native.apex.ac.element":{"type":"onap.policies.native.Apex","type_version":"1.0.0","properties":{"engineServiceParameters":{"name":"MyApexEngine","version":"0.0.1","id":45,"instanceCount":2,"deploymentPort":12561,"engineParameters":{"executorParameters":{"JAVASCRIPT":{"parameterClassName":"org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"}},"contextParameters":{"parameterClassName":"org.onap.policy.apex.context.parameters.ContextParameters","schemaParameters":{"Json":{"parameterClassName":"org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"}}}},"policy_type_impl":{"policies":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"policyMap":{"entry":[{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"policyKey":{"name":"ReceiveEventPolicy","version":"0.0.1"},"template":"Freestyle","state":{"entry":[{"key":"DecideForwardingState","value":{"stateKey":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DecideForwardingState"},"trigger":{"name":"AcElementEvent","version":"0.0.1"},"stateOutputs":{"entry":[{"key":"CreateForwardPayload","value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"},"outgoingEvent":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"outgoingEventReference":[{"name":"DmaapResponseStatusEvent","version":"0.0.1"}],"nextState":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}}}]},"contextAlbumReference":[],"taskSelectionLogic":{"key":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"},"logicFlavour":"UNDEFINED","logic":""},"stateFinalizerLogicMap":{"entry":[]},"defaultTask":{"name":"ForwardPayloadTa
sk","version":"0.0.1"},"taskReferences":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"ReceiveEventPolicy"},"outputType":"DIRECT","output":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"}}}]}}}]},"firstState":"DecideForwardingState"}}]}},"tasks":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"taskMap":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"inputEvent":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"},"outputEvents":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]},"taskParameters":{"entry":[]},"contextAlbumReference":[{"name":"ACElementAlbum","version":"0.0.1"}],"taskLogic":{"key":{"parentKeyName":"ForwardPayloadTask","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"TaskLogic"},"logicFlavour":"JAVASCRIPT","logic":"/*\n * ============LICENSE_START=======================================================\n * Copyright (C) 2022 Nordix. 
All rights reserved.\n * ================================================================================\n * Licensed under the Apache License, Version 2.0 (the 'License');\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an 'AS IS' BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n * SPDX-License-Identifier: Apache-2.0\n * ============LICENSE_END=========================================================\n */\n\nexecutor.logger.info(executor.subject.id);\nexecutor.logger.info(executor.inFields);\n\nvar msgResponse = executor.inFields.get('DmaapResponseEvent');\nexecutor.logger.info('Task in progress with mesages: ' + msgResponse);\n\nvar elementId = msgResponse.get('elementId').get('name');\n\nif (msgResponse.get('messageType') == 'STATUS' &&\n (elementId == 'onap.policy.clamp.ac.startertobridge'\n || elementId == 'onap.policy.clamp.ac.bridgetosink')) {\n\n var receiverId = '';\n if (elementId == 'onap.policy.clamp.ac.startertobridge') {\n receiverId = 'onap.policy.clamp.ac.bridge';\n } else {\n receiverId = 'onap.policy.clamp.ac.sink';\n }\n\n var elementIdResponse = new java.util.HashMap();\n elementIdResponse.put('name', receiverId);\n elementIdResponse.put('version', msgResponse.get('elementId').get('version'));\n\n var dmaapResponse = new java.util.HashMap();\n dmaapResponse.put('elementId', elementIdResponse);\n\n var message = msgResponse.get('message') + ' trace added from policy';\n dmaapResponse.put('message', message);\n dmaapResponse.put('messageType', 'STATUS');\n dmaapResponse.put('messageId', msgResponse.get('messageId'));\n dmaapResponse.put('timestamp', msgResponse.get('timestamp'));\n\n executor.logger.info('Sending forwarding Event to Ac element: ' + dmaapResponse);\n\n executor.outFields.put('DmaapResponseStatusEvent', 
dmaapResponse);\n}\n\ntrue;"}}}]}},"events":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"eventMap":{"entry":[{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"file","parameter":{"entry":[{"key":"final_status","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"final_status"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}},{"key":"message","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"message"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]}},"albums":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"albums":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"scope":"policy","isWritable":true,"itemSchema":{"name":"ACEventType","version":"0.0.1"}}}]}},"schemas":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"schemas":{"entry":[{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"schemaFlavour":"Json","schemaDefinition":"{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"elementId\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"version\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"name\",\n \"version\"\n ]\n },\n \"message\": {\n \"type\": \"string\"\n },\n \"messageType\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"elementId\",\n \"message\",\n \"messageType\"\n 
]\n}"}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.Integer"}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.String"}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.util.UUID"}}]}},"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"keyInformation":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"keyInfoMap":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"UUID":"7cddfab8-6d3f-3f7f-8ac3-e2eb5979c900","description":"Generated description for concept referred to by key \"ACElementAlbum:0.0.1\""}},{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"UUID":"dab78794-b666-3929-a75b-70d634b04fe5","description":"Generated description for concept referred to by key \"ACEventType:0.0.1\""}},{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"UUID":"da478611-7d77-3c46-b4be-be968769ba4e","description":"Generated description for concept referred to by key \"APEXacElementPolicy:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"UUID":"fa8dc15e-8c8d-3de3-a0f8-585b76511175","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Albums:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"UUID":"8508cd65-8dd2-342d-a5c6-1570810dbe2b","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Events:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"UUID":"09e6927d-c5ac-3779-919f-9333994eed22","description":"Generated description for concept referred to by key \"APEXacElementPolicy_KeyInfo:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"UUID":"cade3c9a-1600-3642-a6f4-315612187f46","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Policies:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"UUID":"5bb4a8e9-35fa-37db-9a49-48ef036a7ba9","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Schemas:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"UUID":"2527eeec-0d1f-3094-ad3f-212622b12836","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Tasks:0.0.1\""}},{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"UUID":"32c013e2-2740-3986-a626-cbdf665b63e9","description":"Generated description for concept referred to by key 
\"AcElementEvent:0.0.1\""}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"UUID":"2715cb6c-2778-3461-8b69-871e79f95935","description":"Generated description for concept referred to by key \"DmaapResponseStatusEvent:0.0.1\""}},{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"UUID":"51defa03-1ecf-3314-bf34-2a652bce57fa","description":"Generated description for concept referred to by key \"ForwardPayloadTask:0.0.1\""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"UUID":"c540f048-96af-35e3-a36e-e9c29377cba7","description":"Generated description for concept referred to by key \"LogEvent:0.0.1\""}},{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"UUID":"568b7345-9de1-36d3-b6a3-9b857e6809a1","description":"Generated description for concept referred to by key \"ReceiveEventPolicy:0.0.1\""}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"UUID":"153791fd-ae0a-36a7-88a5-309a7936415d","description":"Generated description for concept referred to by key \"SimpleIntType:0.0.1\""}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"UUID":"8a4957cf-9493-3a76-8c22-a208e23259af","description":"Generated description for concept referred to by key \"SimpleStringType:0.0.1\""}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"UUID":"6a8cc68e-dfc8-3403-9c6d-071c886b319c","description":"Generated description for concept referred to by key 
\"UUIDType:0.0.1\""}}]}}}},"eventInputParameters":{"DmaapConsumer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","groupId":"clamp-grp","enableAutoCommit":true,"autoCommitTime":1000,"sessionTimeout":30000,"consumerPollTime":100,"consumerTopicList":["ac_element_msg"],"keyDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","valueDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseEvent"}},"eventName":"AcElementEvent","eventNameFilter":"AcElementEvent"}},"eventOutputParameters":{"logOutputter":{"carrierTechnologyParameters":{"carrierTechnology":"FILE","parameters":{"fileName":"outputevents.log"}},"eventProtocolParameters":{"eventProtocol":"JSON"}},"DmaapReplyProducer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","acks":"all","retries":0,"batchSize":16384,"lingerTime":1,"bufferMemory":33554432,"producerTopic":"policy_update_msg","keySerializer":"org.apache.kafka.common.serialization.StringSerializer","valueSerializer":"org.apache.kafka.common.serialization.StringSerializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseStatusEvent"}},"eventNameFilter":"LogEvent|DmaapResponseStatusEvent"}}},"name":"onap.policies.native.apex.ac.element","version":"1.0.0","metadata":{"policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0"}}}]},"name":"NULL","version":"0.0.0"},"properties":{"policy_type_id":{"name":"onap.policies.native.Apex","version":"1.0.0"},"policy_id":{"get_input":"acm_element_policy"}}}]}],"startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_DEPLOY","messageId":"46590b55-0c49-46ae-b243-90cfb0a03d4c","timestamp":"2024-02-16T17:03:53.679413689Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:34.133+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:34.135+00:00|INFO|StandardService|main] Starting service [Tomcat] 17:04:40 policy-clamp-ac-pf-ppnt | heartbeat.interval.ms = 3000 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | interceptor.classes = [] 17:04:40 kafka | [2024-02-16 17:02:31,053] INFO starting (kafka.server.KafkaServer) 17:04:40 simulator | 2024-02-16 17:02:28,469 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] mariadbd: Shutdown complete 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,428] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 17:04:40 policy-api | [2024-02-16T17:03:08.245+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 21.503 seconds (process running for 23.571) 17:04:40 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | group.instance.id = null 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:57.697+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:34.135+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:34.235+00:00|INFO|[/onap/policy/clamp/acm]|main] Initializing Spring embedded WebApplicationContext 17:04:40 policy-clamp-ac-pf-ppnt | interceptor.classes = [] 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | internal.leave.group.on.close = true 17:04:40 kafka | [2024-02-16 17:02:31,054] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 17:04:40 simulator | 2024-02-16 17:02:28,470 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 17:04:40 mariadb | 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,428] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 17:04:40 policy-api | [2024-02-16T17:03:54.256+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' 17:04:40 policy-apex-pdp | sasl.mechanism = GSSAPI 17:04:40 policy-clamp-ac-sim-ppnt | heartbeat.interval.ms = 3000 17:04:40 policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[{"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","deployState":"UNDEPLOYED","lockState":"NONE","elements":[{"automationCompositionElementId":"709c62b3-8918-41b9-a747-d21eb79c6c20","deployState":"DEPLOYING","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{}}]}],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"9248b3f5-302a-4c92-b6c4-812b252c6967","timestamp":"2024-02-16T17:03:57.674696111Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:34.235+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3839 ms 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:35.300+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 17:04:40 policy-clamp-ac-pf-ppnt | internal.leave.group.on.close = true 17:04:40 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 17:04:40 policy-clamp-ac-http-ppnt | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 kafka | [2024-02-16 17:02:31,069] INFO [ZooKeeperClient Kafka server] Initializing a new session to 
zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 17:04:40 simulator | 2024-02-16 17:02:28,473 INFO Session workerName=node0 17:04:40 mariadb | 2024-02-16 17:02:33+00:00 [Note] [Entrypoint]: Temporary server stopped 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,430] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 17:04:40 policy-api | [2024-02-16T17:03:54.256+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' 17:04:40 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-ac-sim-ppnt | interceptor.classes = [] 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:03:57.731+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:35.386+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:35.389+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 17:04:40 policy-clamp-ac-pf-ppnt | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | isolation.level = read_uncommitted 17:04:40 kafka | [2024-02-16 17:02:31,073] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 17:04:40 simulator | 2024-02-16 17:02:28,513 INFO Using GSON for REST calls 17:04:40 mariadb | 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,441] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 17:04:40 policy-api | [2024-02-16T17:03:54.258+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 1 ms 17:04:40 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-ac-sim-ppnt | internal.leave.group.on.close = true 17:04:40 policy-clamp-ac-k8s-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DEPLOYED","lockState":"LOCKED","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deployed"}},"responseTo":"46590b55-0c49-46ae-b243-90cfb0a03d4c","result":true,"stateChangeResult":"NO_ERROR","message":"Deployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:35.434+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:35.802+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 17:04:40 policy-clamp-ac-pf-ppnt | isolation.level = read_uncommitted 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 17:04:40 policy-clamp-ac-http-ppnt | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 kafka | [2024-02-16 17:02:31,073] INFO Client environment:host.name=d35b1e98ac95 (org.apache.zookeeper.ZooKeeper) 17:04:40 kafka | [2024-02-16 
17:02:31,073] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 17:04:40 mariadb | 2024-02-16 17:02:33+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,452] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 17:04:40 policy-api | [2024-02-16T17:03:54.522+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers: 17:04:40 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-ac-sim-ppnt | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:21.338+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:35.828+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:35.955+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@56f730b2 17:04:40 policy-clamp-ac-pf-ppnt | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | max.partition.fetch.bytes = 1048576 17:04:40 kafka | [2024-02-16 17:02:31,073] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 17:04:40 kafka | [2024-02-16 17:02:31,073] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 17:04:40 simulator | 2024-02-16 17:02:28,521 INFO Started o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE} 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,481] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 17:04:40 policy-api | [] 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-clamp-ac-sim-ppnt | isolation.level = read_uncommitted 17:04:40 policy-clamp-ac-k8s-ppnt | {"deployOrderedState":"UNDEPLOY","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","timestamp":"2024-02-16T17:04:21.322183159Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:35.957+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
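[editor's note] The DmaapConsumer carrier parameters embedded in the APEX policy payload logged above (bootstrapServers kafka:9092, groupId clamp-grp, enableAutoCommit true, autoCommitTime 1000, sessionTimeout 30000, consumerPollTime 100, topic ac_element_msg, String key/value deserializers) correspond to an ordinary Kafka consumer configured roughly as in the sketch below. This is an illustrative, hypothetical snippet only; it is not part of the CSIT run or of the APEX Kafka carrier plugin.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class AcElementMsgReader {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values mirror the DmaapConsumer carrier parameters from the logged APEX policy.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "clamp-grp");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("ac_element_msg"));
                while (true) {
                    // consumerPollTime is 100 ms in the policy; poll with the same interval here.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }

The DmaapReplyProducer block in the same policy plays the mirror-image role on the producer side, publishing to the policy_update_msg topic with String serializers.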
17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:35.996+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 17:04:40 policy-clamp-ac-pf-ppnt | max.partition.fetch.bytes = 1048576 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | max.poll.interval.ms = 300000 17:04:40 mariadb | 17:04:40 kafka | [2024-02-16 17:02:31,073] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jack
son-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share
/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 17:04:40 kafka | [2024-02-16 17:02:31,073] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,482] INFO Started @914ms (org.eclipse.jetty.server.Server) 17:04:40 policy-api | [2024-02-16T17:04:22.226+00:00|WARN|CommonRestController|http-nio-6969-exec-7] DELETE /policytypes/onap.policies.Native/versions/1.0.0 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:22.417+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:35.998+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:37.545+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 17:04:40 policy-clamp-ac-pf-ppnt | max.poll.interval.ms = 300000 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | max.poll.records = 500 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 17:04:40 simulator | 2024-02-16 17:02:28,522 INFO Started VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 17:04:40 kafka | [2024-02-16 17:02:31,073] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,482] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 17:04:40 policy-api | [2024-02-16T17:04:22.399+00:00|WARN|CommonRestController|http-nio-6969-exec-8] DELETE /policytypes/onap.policies.native.Apex/versions/1.0.0 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | max.partition.fetch.bytes = 1048576 17:04:40 policy-clamp-ac-k8s-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"UNDEPLOYED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Undeployed"}},"responseTo":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","result":true,"stateChangeResult":"NO_ERROR","message":"Undeployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:37.808+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:38.323+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.models.acm.persistence.repository.AutomationCompositionRepository 17:04:40 policy-clamp-ac-pf-ppnt | max.poll.records = 500 17:04:40 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 17:04:40 policy-clamp-ac-http-ppnt | metadata.max.age.ms = 300000 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 17:04:40 simulator | 2024-02-16 17:02:28,522 INFO Started Server@6035b93b{STARTING}[11.0.20,sto=0] @2221ms 17:04:40 kafka | [2024-02-16 17:02:31,073] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,488] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-clamp-ac-sim-ppnt | max.poll.interval.ms = 300000 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:26.704+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:38.453+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.models.acm.persistence.repository.AutomationCompositionElementRepository 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:38.530+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.models.acm.persistence.repository.NodeTemplateStateRepository 17:04:40 policy-clamp-ac-pf-ppnt | metadata.max.age.ms = 300000 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | metric.reporters = [] 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: Number of transaction pools: 1 17:04:40 simulator | 2024-02-16 17:02:28,523 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4946 ms. 17:04:40 kafka | [2024-02-16 17:02:31,073] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,489] WARN maxCnxns is not configured, using default value 0. 
(org.apache.zookeeper.server.ServerCnxnFactory) 17:04:40 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-clamp-ac-sim-ppnt | max.poll.records = 500 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:38.878+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-clamp-runtime-acm | allow.auto.create.topics = true 17:04:40 policy-clamp-ac-k8s-ppnt | {"deployOrderedState":"DELETE","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","timestamp":"2024-02-16T17:04:26.693700299Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-pf-ppnt | metric.reporters = [] 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 17:04:40 policy-clamp-ac-http-ppnt | metrics.num.samples = 2 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 17:04:40 simulator | 2024-02-16 17:02:28,524 INFO org.onap.policy.models.simulators started 17:04:40 kafka | [2024-02-16 17:02:31,074] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,491] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 17:04:40 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-clamp-ac-sim-ppnt | metadata.max.age.ms = 300000 17:04:40 policy-clamp-runtime-acm | auto.commit.interval.ms = 5000 17:04:40 policy-clamp-runtime-acm | auto.include.jmx.reporter = true 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:26.710+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | metrics.num.samples = 2 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | metrics.recording.level = INFO 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 17:04:40 kafka | [2024-02-16 17:02:31,074] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,492] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 17:04:40 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-clamp-ac-sim-ppnt | metric.reporters = [] 17:04:40 policy-clamp-runtime-acm | auto.offset.reset = latest 17:04:40 policy-clamp-runtime-acm | bootstrap.servers = [kafka:9092] 17:04:40 policy-clamp-ac-k8s-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 policy-clamp-ac-pf-ppnt | metrics.recording.level = INFO 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | metrics.sample.window.ms = 
30000 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 17:04:40 kafka | [2024-02-16 17:02:31,074] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,512] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 17:04:40 policy-apex-pdp | security.protocol = PLAINTEXT 17:04:40 policy-clamp-ac-sim-ppnt | metrics.num.samples = 2 17:04:40 policy-clamp-runtime-acm | check.crcs = true 17:04:40 policy-clamp-runtime-acm | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:26.728+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | metrics.sample.window.ms = 30000 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 17:04:40 kafka | [2024-02-16 17:02:31,074] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,512] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 17:04:40 policy-apex-pdp | security.providers = null 17:04:40 policy-clamp-ac-sim-ppnt | metrics.recording.level = INFO 17:04:40 policy-clamp-runtime-acm | client.id = consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-1 17:04:40 policy-clamp-runtime-acm | client.rack = 17:04:40 policy-clamp-ac-k8s-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DELETED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deleted"}},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Deleted","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-ac-pf-ppnt | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 17:04:40 policy-clamp-ac-http-ppnt | receive.buffer.bytes = 65536 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 17:04:40 kafka | [2024-02-16 17:02:31,074] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,514] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 17:04:40 policy-apex-pdp | send.buffer.bytes = 131072 17:04:40 policy-clamp-ac-sim-ppnt | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-runtime-acm | connections.max.idle.ms = 540000 17:04:40 policy-clamp-runtime-acm | default.api.timeout.ms = 60000 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:26.737+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] 
[IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | receive.buffer.bytes = 65536 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | reconnect.backoff.max.ms = 1000 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: Completed initialization of buffer pool 17:04:40 kafka | [2024-02-16 17:02:31,074] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,514] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 17:04:40 policy-apex-pdp | session.timeout.ms = 45000 17:04:40 policy-clamp-ac-sim-ppnt | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-clamp-runtime-acm | enable.auto.commit = true 17:04:40 policy-clamp-runtime-acm | exclude.internal.topics = true 17:04:40 policy-clamp-ac-k8s-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-clamp-ac-pf-ppnt | reconnect.backoff.max.ms = 1000 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 17:04:40 policy-clamp-ac-http-ppnt | reconnect.backoff.ms = 50 17:04:40 mariadb | 2024-02-16 17:02:33 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 17:04:40 kafka | [2024-02-16 17:02:31,074] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,520] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 17:04:40 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-clamp-ac-sim-ppnt | receive.buffer.bytes = 65536 17:04:40 policy-clamp-runtime-acm | fetch.max.bytes = 52428800 17:04:40 policy-clamp-runtime-acm | fetch.max.wait.ms = 500 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:26.737+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | reconnect.backoff.ms = 50 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | request.timeout.ms = 30000 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Note] InnoDB: 128 rollback segments are active. 
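[editor's note] The AUTOMATION_COMPOSITION_STATECHANGE_ACK payloads traced above share a small envelope (automationCompositionId, automationCompositionResultMap, responseTo, result, stateChangeResult, message, messageType, participantId), where responseTo carries the messageId of the state-change request being acknowledged. A minimal, hypothetical Jackson sketch for inspecting one such payload follows; the sample string is shaped after the logged messages and the code is not taken from the participant implementation.

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class StateChangeAckPeek {
        public static void main(String[] args) throws Exception {
            // Sample shaped like the STATECHANGE_ACK entries in the log above.
            String json = "{\"automationCompositionId\":\"5f8b554f-0760-497d-900b-f38674e2d074\","
                    + "\"responseTo\":\"6ec7c980-19d1-4f12-89b9-892e3bbc5013\",\"result\":true,"
                    + "\"stateChangeResult\":\"NO_ERROR\",\"message\":\"Deleted\","
                    + "\"messageType\":\"AUTOMATION_COMPOSITION_STATECHANGE_ACK\","
                    + "\"participantId\":\"101c62b3-8918-41b9-a747-d21eb79c6c03\"}";

            JsonNode ack = new ObjectMapper().readTree(json);
            // responseTo is what ties this ACK back to the request messageId seen earlier in the log.
            System.out.println(ack.get("messageType").asText()
                    + " for " + ack.get("automationCompositionId").asText()
                    + ", responseTo=" + ack.get("responseTo").asText()
                    + ", result=" + ack.get("result").asBoolean()
                    + ", stateChangeResult=" + ack.get("stateChangeResult").asText());
        }
    }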
17:04:40 kafka | [2024-02-16 17:02:31,074] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 17:04:40 kafka | [2024-02-16 17:02:31,076] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) 17:04:40 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-runtime-acm | fetch.min.bytes = 1 17:04:40 policy-clamp-runtime-acm | group.id = 0b0f93e1-9727-45a5-b97d-714a24b64a62 17:04:40 policy-clamp-runtime-acm | group.instance.id = null 17:04:40 policy-clamp-ac-pf-ppnt | request.timeout.ms = 30000 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | retry.backoff.ms = 100 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 17:04:40 kafka | [2024-02-16 17:02:31,080] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,520] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 17:04:40 policy-apex-pdp | ssl.cipher.suites = null 17:04:40 policy-clamp-ac-sim-ppnt | reconnect.backoff.ms = 50 17:04:40 policy-clamp-runtime-acm | heartbeat.interval.ms = 3000 17:04:40 policy-pap | [2024-02-16T17:03:23.236+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 17:04:40 policy-clamp-ac-k8s-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 policy-clamp-ac-pf-ppnt | retry.backoff.ms = 100 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | sasl.client.callback.handler.class = null 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 17:04:40 kafka | [2024-02-16 17:02:31,086] INFO zookeeper.request.timeout value is 0. 
feature enabled=false (org.apache.zookeeper.ClientCnxn) 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,524] INFO Snapshot loaded in 10 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 17:04:40 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-ac-sim-ppnt | request.timeout.ms = 30000 17:04:40 policy-pap | [2024-02-16T17:03:23.567+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-pap | allow.auto.create.topics = true 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:26.737+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.client.callback.handler.class = null 17:04:40 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 17:04:40 policy-clamp-ac-http-ppnt | sasl.jaas.config = null 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Note] InnoDB: log sequence number 330921; transaction id 299 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,525] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 17:04:40 kafka | [2024-02-16 17:02:31,087] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 17:04:40 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 17:04:40 policy-clamp-ac-sim-ppnt | retry.backoff.ms = 100 17:04:40 policy-pap | auto.commit.interval.ms = 5000 17:04:40 policy-pap | auto.include.jmx.reporter = true 17:04:40 policy-clamp-ac-k8s-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-clamp-ac-pf-ppnt | sasl.jaas.config = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,525] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:04:40 kafka | [2024-02-16 17:02:31,095] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 17:04:40 policy-apex-pdp | ssl.engine.factory.class = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.client.callback.handler.class = null 17:04:40 policy-pap | auto.offset.reset = latest 17:04:40 policy-pap | bootstrap.servers = [kafka:9092] 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:26.976+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Note] Plugin 'FEEDBACK' is disabled. 
17:04:40 zookeeper_1 | [2024-02-16 17:02:27,536] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 17:04:40 kafka | [2024-02-16 17:02:31,104] INFO Socket connection established, initiating session, client: /172.17.0.5:56964, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 17:04:40 policy-apex-pdp | ssl.key.password = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.jaas.config = null 17:04:40 policy-pap | check.crcs = true 17:04:40 policy-pap | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-ac-k8s-ppnt | {"messageType":"PARTICIPANT_PRIME","messageId":"06743202-529d-44dd-aee6-94cbebea181c","timestamp":"2024-02-16T17:04:26.969103751Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.service.name = null 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,536] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 17:04:40 kafka | [2024-02-16 17:02:31,190] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000554810001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 17:04:40 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-pap | client.id = consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-1 17:04:40 policy-pap | client.rack = 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:26.979+00:00|INFO|network|pool-4-thread-2] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.service.name = null 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,559] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 17:04:40 kafka | [2024-02-16 17:02:31,197] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 17:04:40 policy-apex-pdp | ssl.keystore.certificate.chain = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-pap | connections.max.idle.ms = 540000 17:04:40 policy-pap | default.api.timeout.ms = 60000 17:04:40 policy-clamp-ac-k8s-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Note] Server socket created on IP: '0.0.0.0'. 17:04:40 zookeeper_1 | [2024-02-16 17:02:27,560] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) 17:04:40 kafka | [2024-02-16 17:02:32,374] INFO Cluster ID = vB0B1qTrTYKUb3QN_6Wq6A (kafka.server.KafkaServer) 17:04:40 policy-apex-pdp | ssl.keystore.key = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.service.name = null 17:04:40 policy-pap | enable.auto.commit = true 17:04:40 policy-pap | exclude.internal.topics = true 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:26.992+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.callback.handler.class = null 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Note] Server socket created on IP: '::'. 17:04:40 zookeeper_1 | [2024-02-16 17:02:29,723] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 17:04:40 kafka | [2024-02-16 17:02:32,377] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 17:04:40 policy-apex-pdp | ssl.keystore.location = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-pap | fetch.max.bytes = 52428800 17:04:40 policy-pap | fetch.max.wait.ms = 500 17:04:40 policy-clamp-ac-k8s-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.callback.handler.class = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.class = null 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Note] mariadbd: ready for connections. 
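[editor's note] The policy-db-migrator lines above create the jpapdpsubgroup_* and jpatosca* tables in the MariaDB instance that reports "ready for connections" on port 3306. A throwaway JDBC sketch for listing those tables after migration is shown below; the database name and credentials are placeholders and only the mariadb host and port come from this log.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MigrationTableCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder schema name and credentials; host/port taken from the mariadb log line.
            String url = "jdbc:mariadb://mariadb:3306/<schema>";
            try (Connection conn = DriverManager.getConnection(url, "<user>", "<password>");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT table_name FROM information_schema.tables "
                         + "WHERE table_schema = DATABASE() AND table_name LIKE 'jpatosca%'")) {
                while (rs.next()) {
                    // e.g. jpatoscacapabilityassignment_metadata, jpatoscacapabilitytype_properties
                    System.out.println(rs.getString(1));
                }
            }
        }
    }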
17:04:40 kafka | [2024-02-16 17:02:32,439] INFO KafkaConfig values: 17:04:40 policy-apex-pdp | ssl.keystore.password = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-pap | fetch.min.bytes = 1 17:04:40 policy-pap | group.id = 084a2e58-01c1-4612-9881-9e51d9ffa3ed 17:04:40 policy-clamp-runtime-acm | interceptor.classes = [] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.class = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.connect.timeout.ms = null 17:04:40 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 17:04:40 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 17:04:40 policy-apex-pdp | ssl.keystore.type = JKS 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:26.995+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | group.instance.id = null 17:04:40 policy-clamp-runtime-acm | internal.leave.group.on.close = true 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.connect.timeout.ms = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.read.timeout.ms = null 17:04:40 mariadb | 2024-02-16 17:02:34 0 [Note] InnoDB: Buffer pool(s) load completed at 240216 17:02:34 17:04:40 kafka | alter.config.policy.class.name = null 17:04:40 policy-apex-pdp | ssl.protocol = TLSv1.3 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.class = null 17:04:40 policy-pap | heartbeat.interval.ms = 3000 17:04:40 policy-clamp-runtime-acm | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.read.timeout.ms = null 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.buffer.seconds = 300 17:04:40 mariadb | 2024-02-16 17:02:34 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) 17:04:40 kafka | alter.log.dirs.replication.quota.window.num = 11 17:04:40 policy-apex-pdp | ssl.provider = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.connect.timeout.ms = null 17:04:40 policy-pap | interceptor.classes = [] 17:04:40 policy-clamp-runtime-acm | isolation.level = read_uncommitted 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.min.period.seconds = 60 17:04:40 mariadb | 2024-02-16 17:02:34 7 [Warning] Aborted connection 7 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 17:04:40 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 17:04:40 policy-apex-pdp | ssl.secure.random.implementation = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.read.timeout.ms = null 17:04:40 policy-pap | internal.leave.group.on.close = true 17:04:40 policy-clamp-runtime-acm | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-db-migrator | > upgrade 
0180-jpatoscacapabilityassignment_properties.sql 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.window.factor = 0.8 17:04:40 mariadb | 2024-02-16 17:02:35 27 [Warning] Aborted connection 27 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 17:04:40 kafka | authorizer.class.name = 17:04:40 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-clamp-runtime-acm | max.partition.fetch.bytes = 1048576 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.window.jitter = 0.05 17:04:40 mariadb | 2024-02-16 17:02:35 34 [Warning] Aborted connection 34 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.13' (This connection closed normally without authentication) 17:04:40 kafka | auto.create.topics.enable = true 17:04:40 policy-apex-pdp | ssl.truststore.certificates = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-pap | isolation.level = read_uncommitted 17:04:40 policy-clamp-runtime-acm | max.poll.interval.ms = 300000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.retry.backoff.max.ms = 10000 17:04:40 mariadb | 2024-02-16 17:02:36 63 [Warning] Aborted connection 63 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.14' (This connection closed normally without authentication) 17:04:40 kafka | auto.include.jmx.reporter = true 17:04:40 policy-apex-pdp | ssl.truststore.location = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-runtime-acm | max.poll.records = 500 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.retry.backoff.ms = 100 17:04:40 kafka | auto.leader.rebalance.enable = true 17:04:40 policy-apex-pdp | ssl.truststore.password = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-pap | max.partition.fetch.bytes = 1048576 17:04:40 policy-clamp-runtime-acm | metadata.max.age.ms = 300000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.retry.backoff.ms = 100 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | sasl.mechanism = GSSAPI 17:04:40 kafka | background.threads = 10 17:04:40 policy-apex-pdp | ssl.truststore.type = JKS 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-pap | max.poll.interval.ms = 300000 17:04:40 policy-clamp-runtime-acm | metric.reporters = [] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.mechanism = GSSAPI 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 kafka | broker.heartbeat.interval.ms = 2000 17:04:40 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 
policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.ms = 100 17:04:40 policy-pap | max.poll.records = 500 17:04:40 policy-clamp-runtime-acm | metrics.num.samples = 2 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.expected.audience = null 17:04:40 kafka | broker.id = 1 17:04:40 policy-apex-pdp | 17:04:40 policy-clamp-ac-sim-ppnt | sasl.mechanism = GSSAPI 17:04:40 policy-pap | metadata.max.age.ms = 300000 17:04:40 policy-clamp-runtime-acm | metrics.recording.level = INFO 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.expected.audience = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.expected.issuer = null 17:04:40 kafka | broker.id.generation.enable = true 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.371+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-pap | metric.reporters = [] 17:04:40 policy-clamp-runtime-acm | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 kafka | broker.rack = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.371+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.audience = null 17:04:40 policy-pap | metrics.num.samples = 2 17:04:40 policy-clamp-runtime-acm | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 kafka | broker.session.timeout.ms = 9000 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.371+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103007369 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-ac-k8s-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 policy-pap | metrics.recording.level = INFO 17:04:40 policy-clamp-runtime-acm | receive.buffer.bytes = 65536 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 kafka | client.quota.callback.class = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.374+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-1, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Subscribed to topic(s): policy-pdp-pap 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 
17:04:40 policy-clamp-ac-k8s-ppnt | [2024-02-16T17:04:27.005+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-runtime-acm | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 kafka | compression.type = producer 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.390+00:00|INFO|ServiceManager|main] service manager starting 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-k8s-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-clamp-runtime-acm | reconnect.backoff.ms = 50 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.scope.claim.name = scope 17:04:40 kafka | connection.failed.authentication.delay.ms = 100 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.391+00:00|INFO|ServiceManager|main] service manager starting topics 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-pap | receive.buffer.bytes = 65536 17:04:40 policy-clamp-runtime-acm | request.timeout.ms = 30000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.sub.claim.name = sub 17:04:40 kafka | connections.max.idle.ms = 600000 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.395+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=02b3ddfc-6c0d-4750-8519-6e56d3cb3479, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-pap | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-runtime-acm | retry.backoff.ms = 100 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.token.endpoint.url = null 17:04:40 kafka | connections.max.reauth.ms = 0 17:04:40 policy-apex-pdp | 
[2024-02-16T17:03:27.422+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-pap | reconnect.backoff.ms = 50 17:04:40 policy-clamp-runtime-acm | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | security.protocol = PLAINTEXT 17:04:40 kafka | control.plane.listener.name = null 17:04:40 policy-apex-pdp | allow.auto.create.topics = true 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-pap | request.timeout.ms = 30000 17:04:40 policy-clamp-runtime-acm | sasl.jaas.config = null 17:04:40 policy-clamp-ac-pf-ppnt | security.protocol = PLAINTEXT 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | security.providers = null 17:04:40 kafka | controlled.shutdown.enable = true 17:04:40 policy-apex-pdp | auto.commit.interval.ms = 5000 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-pap | retry.backoff.ms = 100 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-ac-pf-ppnt | security.providers = null 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | send.buffer.bytes = 131072 17:04:40 kafka | controlled.shutdown.max.retries = 3 17:04:40 policy-apex-pdp | auto.include.jmx.reporter = true 17:04:40 policy-clamp-ac-sim-ppnt | security.protocol = PLAINTEXT 17:04:40 policy-pap | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-ac-pf-ppnt | send.buffer.bytes = 131072 17:04:40 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 17:04:40 policy-clamp-ac-http-ppnt | session.timeout.ms = 45000 17:04:40 kafka | controlled.shutdown.retry.backoff.ms = 5000 17:04:40 policy-apex-pdp | auto.offset.reset = latest 17:04:40 policy-clamp-ac-sim-ppnt | security.providers = null 17:04:40 policy-pap | sasl.jaas.config = null 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.service.name = null 17:04:40 policy-clamp-ac-pf-ppnt | session.timeout.ms = 45000 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | socket.connection.setup.timeout.max.ms = 30000 17:04:40 kafka | controller.listener.names = null 17:04:40 policy-apex-pdp | bootstrap.servers = [kafka:9092] 17:04:40 policy-clamp-ac-sim-ppnt | send.buffer.bytes = 131072 17:04:40 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-clamp-ac-pf-ppnt | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 17:04:40 policy-clamp-ac-http-ppnt | socket.connection.setup.timeout.ms = 10000 17:04:40 kafka | controller.quorum.append.linger.ms = 25 17:04:40 policy-apex-pdp | check.crcs = true 17:04:40 policy-clamp-ac-sim-ppnt | session.timeout.ms = 45000 17:04:40 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-clamp-ac-pf-ppnt | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | ssl.cipher.suites = null 
17:04:40 kafka | controller.quorum.election.backoff.max.ms = 1000 17:04:40 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-pap | sasl.kerberos.service.name = null 17:04:40 policy-clamp-runtime-acm | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.cipher.suites = null 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 kafka | controller.quorum.election.timeout.ms = 1000 17:04:40 policy-apex-pdp | client.id = consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2 17:04:40 policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-clamp-runtime-acm | sasl.login.class = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | ssl.endpoint.identification.algorithm = https 17:04:40 kafka | controller.quorum.fetch.timeout.ms = 2000 17:04:40 policy-apex-pdp | client.rack = 17:04:40 policy-clamp-ac-sim-ppnt | ssl.cipher.suites = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-runtime-acm | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.endpoint.identification.algorithm = https 17:04:40 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 17:04:40 policy-clamp-ac-http-ppnt | ssl.engine.factory.class = null 17:04:40 kafka | controller.quorum.request.timeout.ms = 2000 17:04:40 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-clamp-ac-sim-ppnt | ssl.endpoint.identification.algorithm = https 17:04:40 policy-clamp-runtime-acm | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.engine.factory.class = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | ssl.key.password = null 17:04:40 kafka | controller.quorum.retry.backoff.ms = 20 17:04:40 policy-apex-pdp | connections.max.idle.ms = 540000 17:04:40 policy-pap | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.engine.factory.class = null 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-ac-pf-ppnt | ssl.key.password = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 policy-clamp-ac-http-ppnt | ssl.keymanager.algorithm = SunX509 17:04:40 kafka | controller.quorum.voters = [] 17:04:40 policy-apex-pdp | default.api.timeout.ms = 60000 17:04:40 policy-pap | sasl.login.class = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.key.password = null 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keymanager.algorithm = SunX509 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.certificate.chain = null 17:04:40 kafka | controller.quota.window.num = 11 17:04:40 policy-apex-pdp | enable.auto.commit = true 17:04:40 policy-pap | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.certificate.chain = 
null 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.key = null 17:04:40 kafka | controller.quota.window.size.seconds = 1 17:04:40 policy-apex-pdp | exclude.internal.topics = true 17:04:40 policy-pap | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.certificate.chain = null 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.key = null 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.location = null 17:04:40 kafka | controller.socket.timeout.ms = 30000 17:04:40 policy-apex-pdp | fetch.max.bytes = 52428800 17:04:40 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.key = null 17:04:40 policy-clamp-runtime-acm | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.location = null 17:04:40 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.password = null 17:04:40 kafka | create.topic.policy.class.name = null 17:04:40 policy-apex-pdp | fetch.max.wait.ms = 500 17:04:40 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.location = null 17:04:40 policy-clamp-runtime-acm | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.password = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.type = JKS 17:04:40 kafka | default.replication.factor = 1 17:04:40 policy-apex-pdp | fetch.min.bytes = 1 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.password = null 17:04:40 policy-pap | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-runtime-acm | sasl.mechanism = GSSAPI 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.type = JKS 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:40 policy-clamp-ac-http-ppnt | ssl.protocol = TLSv1.3 17:04:40 kafka | delegation.token.expiry.check.interval.ms = 3600000 17:04:40 policy-apex-pdp | group.id = 02b3ddfc-6c0d-4750-8519-6e56d3cb3479 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.type = JKS 17:04:40 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-ac-pf-ppnt | ssl.protocol = TLSv1.3 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | ssl.provider = null 17:04:40 kafka | delegation.token.expiry.time.ms = 86400000 17:04:40 policy-apex-pdp | group.instance.id = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.protocol = TLSv1.3 17:04:40 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.provider = null 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | ssl.secure.random.implementation = null 17:04:40 kafka | delegation.token.master.key = null 17:04:40 policy-apex-pdp | heartbeat.interval.ms = 3000 17:04:40 policy-clamp-ac-sim-ppnt | ssl.provider = null 17:04:40 policy-pap | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.secure.random.implementation = null 17:04:40 policy-db-migrator | 17:04:40 
kafka | delegation.token.max.lifetime.ms = 604800000 17:04:40 policy-clamp-ac-sim-ppnt | ssl.secure.random.implementation = null 17:04:40 policy-pap | sasl.mechanism = GSSAPI 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-clamp-ac-pf-ppnt | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 17:04:40 policy-apex-pdp | interceptor.classes = [] 17:04:40 policy-clamp-ac-http-ppnt | ssl.trustmanager.algorithm = PKIX 17:04:40 kafka | delegation.token.secret.key = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.certificates = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | internal.leave.group.on.close = true 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.certificates = null 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.location = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.certificates = null 17:04:40 policy-pap | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.location = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 kafka | delete.records.purgatory.purge.interval.requests = 1 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.password = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.location = null 17:04:40 policy-pap | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.password = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | isolation.level = read_uncommitted 17:04:40 kafka | delete.topic.enable = true 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.type = JKS 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.password = null 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.type = JKS 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 kafka | early.start.listeners = null 17:04:40 policy-clamp-ac-http-ppnt | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.type = JKS 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-clamp-ac-pf-ppnt | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | max.partition.fetch.bytes = 1048576 17:04:40 kafka | fetch.max.bytes = 57671680 17:04:40 policy-clamp-ac-http-ppnt | 17:04:40 policy-clamp-ac-sim-ppnt | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 
17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-clamp-ac-pf-ppnt | 17:04:40 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 17:04:40 policy-apex-pdp | max.poll.interval.ms = 300000 17:04:40 kafka | fetch.purgatory.purge.interval.requests = 1000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:48.271+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-clamp-ac-sim-ppnt | 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-clamp-runtime-acm | security.protocol = PLAINTEXT 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:16.043+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | max.poll.records = 500 17:04:40 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:48.271+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:44.447+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-clamp-runtime-acm | security.providers = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:16.044+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:40 policy-apex-pdp | metadata.max.age.ms = 300000 17:04:40 kafka | group.consumer.heartbeat.interval.ms = 5000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:48.271+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708102968268 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:44.448+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-clamp-runtime-acm | send.buffer.bytes = 131072 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:16.044+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708102996042 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | metric.reporters = [] 17:04:40 kafka | group.consumer.max.heartbeat.interval.ms = 15000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:48.277+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-1, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Subscribed to topic(s): policy-acruntime-participant 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:44.448+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708102964445 17:04:40 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-clamp-runtime-acm | session.timeout.ms = 45000 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:16.046+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-1, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Subscribed to topic(s): policy-acruntime-participant 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | metrics.num.samples = 2 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:48.323+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.acm.participant.http.config.MicrometerConfig 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:44.452+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-1, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Subscribed to topic(s): policy-acruntime-participant 17:04:40 policy-pap | security.protocol = PLAINTEXT 17:04:40 policy-clamp-runtime-acm | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:16.054+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.acm.participant.policy.config.MicrometerConfig 17:04:40 policy-db-migrator | 17:04:40 kafka | group.consumer.max.session.timeout.ms = 60000 17:04:40 policy-apex-pdp | metrics.recording.level = INFO 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:44.476+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.acm.participant.sim.config.MicrometerConfig 17:04:40 policy-pap | security.providers = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:49.359+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@7b676112, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5578be42, org.springframework.security.web.context.SecurityContextHolderFilter@15405bd6, org.springframework.security.web.header.HeaderWriterFilter@7e0bc8a3, org.springframework.security.web.authentication.logout.LogoutFilter@7a360554, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@18578491, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@70730db, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@12704e15, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@4e49ce2b, org.springframework.security.web.access.ExceptionTranslationFilter@3050ac2f, org.springframework.security.web.access.intercept.AuthorizationFilter@4c2af006] 17:04:40 policy-clamp-runtime-acm | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | group.consumer.max.size = 2147483647 17:04:40 policy-apex-pdp | metrics.sample.window.ms = 30000 17:04:40 policy-pap | send.buffer.bytes = 131072 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:52.629+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '/actuator' 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:45.408+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@1caa9eb6, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1f53481b, org.springframework.security.web.context.SecurityContextHolderFilter@3078cac, org.springframework.security.web.header.HeaderWriterFilter@6d5c2745, org.springframework.security.web.authentication.logout.LogoutFilter@176f7f3b, 
org.springframework.security.web.authentication.www.BasicAuthenticationFilter@33425811, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5f2bd6d9, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@43d9f1a2, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@2fcd7d3f, org.springframework.security.web.access.ExceptionTranslationFilter@56cfe111, org.springframework.security.web.access.intercept.AuthorizationFilter@51ec2856] 17:04:40 policy-clamp-runtime-acm | ssl.cipher.suites = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:16.659+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@201c3cda, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4c86da0c, org.springframework.security.web.context.SecurityContextHolderFilter@78b612c6, org.springframework.security.web.header.HeaderWriterFilter@4c5228e7, org.springframework.security.web.authentication.logout.LogoutFilter@43a65cd8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@24a86066, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@22752544, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@69d23296, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5d97caa4, org.springframework.security.web.access.ExceptionTranslationFilter@1a6dc589, org.springframework.security.web.access.intercept.AuthorizationFilter@7d0d91a1] 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 kafka | group.consumer.min.heartbeat.interval.ms = 5000 17:04:40 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-pap | session.timeout.ms = 45000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.043+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:48.832+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 17:04:40 policy-clamp-runtime-acm | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.246+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '/actuator' 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | group.consumer.min.session.timeout.ms = 45000 17:04:40 policy-apex-pdp | receive.buffer.bytes = 65536 17:04:40 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.167+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/onap/policy/clamp/acm/httpparticipant' 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:48.993+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 17:04:40 policy-clamp-runtime-acm | ssl.endpoint.identification.algorithm = https 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.366+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 17:04:40 policy-db-migrator | 17:04:40 kafka | group.consumer.session.timeout.ms = 45000 17:04:40 policy-apex-pdp | reconnect.backoff.max.ms = 1000 17:04:40 
policy-pap | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.198+00:00|INFO|ServiceManager|main] service manager starting 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.155+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/onap/policy/clamp/acm/simparticipant' 17:04:40 policy-clamp-runtime-acm | ssl.engine.factory.class = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.403+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/onap/policy/clamp/acm/policyparticipant' 17:04:40 policy-db-migrator | 17:04:40 kafka | group.coordinator.new.enable = false 17:04:40 policy-apex-pdp | reconnect.backoff.ms = 50 17:04:40 policy-pap | ssl.cipher.suites = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.198+00:00|INFO|ServiceManager|main] service manager starting Topic endpoint management 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.201+00:00|INFO|ServiceManager|main] service manager starting 17:04:40 policy-clamp-runtime-acm | ssl.key.password = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.429+00:00|INFO|ServiceManager|main] service manager starting 17:04:40 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 17:04:40 kafka | group.coordinator.threads = 1 17:04:40 policy-apex-pdp | request.timeout.ms = 30000 17:04:40 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.207+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=32e809a3-a7c0-4e13-b7a3-aa811059e0bc, consumerInstance=policy-clamp-ac-http-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.201+00:00|INFO|ServiceManager|main] service manager starting Topic endpoint management 17:04:40 policy-clamp-runtime-acm | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.430+00:00|INFO|ServiceManager|main] service manager starting Topic endpoint management 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | group.initial.rebalance.delay.ms = 3000 17:04:40 policy-apex-pdp | retry.backoff.ms = 100 17:04:40 policy-pap | ssl.endpoint.identification.algorithm = https 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.239+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.224+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6a2107c9-1f65-47c8-af5c-8c5cc7111397, consumerInstance=policy-clamp-ac-sim-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], 
topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 17:04:40 policy-clamp-runtime-acm | ssl.keystore.certificate.chain = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.437+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=97317da4-3ba6-4109-8e73-20dc2312d257, consumerInstance=policy-clamp-ac-pf-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:40 kafka | group.max.session.timeout.ms = 1800000 17:04:40 policy-apex-pdp | sasl.client.callback.handler.class = null 17:04:40 policy-pap | ssl.engine.factory.class = null 17:04:40 policy-clamp-ac-http-ppnt | allow.auto.create.topics = true 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.297+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-clamp-runtime-acm | ssl.keystore.key = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.462+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | group.max.size = 2147483647 17:04:40 policy-apex-pdp | sasl.jaas.config = null 17:04:40 policy-pap | ssl.key.password = null 17:04:40 policy-clamp-ac-http-ppnt | auto.commit.interval.ms = 5000 17:04:40 policy-clamp-ac-sim-ppnt | allow.auto.create.topics = true 17:04:40 policy-clamp-runtime-acm | ssl.keystore.location = null 17:04:40 policy-clamp-ac-pf-ppnt | allow.auto.create.topics = true 17:04:40 policy-db-migrator | 17:04:40 kafka | group.min.session.timeout.ms = 6000 17:04:40 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-pap | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-ac-http-ppnt | auto.include.jmx.reporter = true 17:04:40 policy-clamp-ac-sim-ppnt | auto.commit.interval.ms = 5000 17:04:40 policy-clamp-runtime-acm | ssl.keystore.password = null 17:04:40 policy-clamp-ac-pf-ppnt | auto.commit.interval.ms = 5000 17:04:40 policy-db-migrator | 17:04:40 kafka | initial.broker.registration.timeout.ms = 60000 17:04:40 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-pap | ssl.keystore.certificate.chain = null 17:04:40 policy-clamp-ac-http-ppnt | auto.offset.reset = latest 17:04:40 policy-clamp-ac-sim-ppnt | auto.include.jmx.reporter = true 17:04:40 policy-clamp-runtime-acm | ssl.keystore.type = JKS 17:04:40 policy-clamp-ac-pf-ppnt | auto.include.jmx.reporter = true 17:04:40 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 17:04:40 kafka | inter.broker.listener.name = PLAINTEXT 17:04:40 policy-apex-pdp | sasl.kerberos.service.name = null 17:04:40 policy-pap | ssl.keystore.key = null 17:04:40 policy-clamp-ac-http-ppnt | bootstrap.servers = [kafka:9092] 17:04:40 policy-clamp-ac-sim-ppnt | auto.offset.reset = latest 17:04:40 policy-clamp-runtime-acm | ssl.protocol = TLSv1.3 
17:04:40 policy-clamp-ac-pf-ppnt | auto.offset.reset = latest 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | inter.broker.protocol.version = 3.6-IV2 17:04:40 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-pap | ssl.keystore.location = null 17:04:40 policy-clamp-ac-http-ppnt | check.crcs = true 17:04:40 policy-clamp-ac-sim-ppnt | bootstrap.servers = [kafka:9092] 17:04:40 policy-clamp-runtime-acm | ssl.provider = null 17:04:40 policy-clamp-ac-pf-ppnt | bootstrap.servers = [kafka:9092] 17:04:40 kafka | kafka.metrics.polling.interval.secs = 10 17:04:40 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 policy-pap | ssl.keystore.password = null 17:04:40 policy-clamp-ac-http-ppnt | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-ac-sim-ppnt | check.crcs = true 17:04:40 policy-clamp-runtime-acm | ssl.secure.random.implementation = null 17:04:40 policy-clamp-ac-pf-ppnt | check.crcs = true 17:04:40 kafka | kafka.metrics.reporters = [] 17:04:40 policy-apex-pdp | sasl.login.callback.handler.class = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.keystore.type = JKS 17:04:40 policy-clamp-ac-http-ppnt | client.id = consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2 17:04:40 policy-clamp-ac-sim-ppnt | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-runtime-acm | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-clamp-ac-pf-ppnt | client.dns.lookup = use_all_dns_ips 17:04:40 kafka | leader.imbalance.check.interval.seconds = 300 17:04:40 policy-apex-pdp | sasl.login.class = null 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.protocol = TLSv1.3 17:04:40 policy-clamp-ac-http-ppnt | client.rack = 17:04:40 policy-clamp-ac-sim-ppnt | client.id = consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2 17:04:40 policy-clamp-runtime-acm | ssl.truststore.certificates = null 17:04:40 policy-clamp-ac-pf-ppnt | client.id = consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2 17:04:40 kafka | leader.imbalance.per.broker.percentage = 10 17:04:40 policy-apex-pdp | sasl.login.connect.timeout.ms = null 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.provider = null 17:04:40 policy-clamp-ac-http-ppnt | connections.max.idle.ms = 540000 17:04:40 policy-clamp-ac-sim-ppnt | client.rack = 17:04:40 policy-clamp-runtime-acm | ssl.truststore.location = null 17:04:40 policy-clamp-ac-pf-ppnt | client.rack = 17:04:40 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 17:04:40 policy-apex-pdp | sasl.login.read.timeout.ms = null 17:04:40 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 17:04:40 policy-pap | ssl.secure.random.implementation = null 17:04:40 policy-clamp-ac-http-ppnt | default.api.timeout.ms = 60000 17:04:40 policy-clamp-ac-sim-ppnt | connections.max.idle.ms = 540000 17:04:40 policy-clamp-runtime-acm | ssl.truststore.password = null 17:04:40 policy-clamp-ac-pf-ppnt | connections.max.idle.ms = 540000 17:04:40 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 17:04:40 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-clamp-ac-http-ppnt | enable.auto.commit = true 17:04:40 policy-clamp-ac-sim-ppnt | default.api.timeout.ms = 
60000 17:04:40 policy-clamp-runtime-acm | ssl.truststore.type = JKS 17:04:40 policy-clamp-ac-pf-ppnt | default.api.timeout.ms = 60000 17:04:40 kafka | log.cleaner.backoff.ms = 15000 17:04:40 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:40 policy-pap | ssl.truststore.certificates = null 17:04:40 policy-clamp-ac-http-ppnt | exclude.internal.topics = true 17:04:40 policy-clamp-ac-sim-ppnt | enable.auto.commit = true 17:04:40 policy-clamp-runtime-acm | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-pf-ppnt | enable.auto.commit = true 17:04:40 kafka | log.cleaner.dedupe.buffer.size = 134217728 17:04:40 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.truststore.location = null 17:04:40 policy-clamp-ac-http-ppnt | fetch.max.bytes = 52428800 17:04:40 policy-clamp-ac-sim-ppnt | exclude.internal.topics = true 17:04:40 policy-clamp-runtime-acm | 17:04:40 policy-clamp-ac-pf-ppnt | exclude.internal.topics = true 17:04:40 kafka | log.cleaner.delete.retention.ms = 86400000 17:04:40 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.truststore.password = null 17:04:40 policy-clamp-ac-http-ppnt | fetch.max.wait.ms = 500 17:04:40 policy-clamp-ac-sim-ppnt | fetch.max.bytes = 52428800 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:39.036+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-clamp-ac-pf-ppnt | fetch.max.bytes = 52428800 17:04:40 kafka | log.cleaner.enable = true 17:04:40 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.truststore.type = JKS 17:04:40 policy-clamp-ac-http-ppnt | fetch.min.bytes = 1 17:04:40 policy-clamp-ac-sim-ppnt | fetch.max.wait.ms = 500 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:39.036+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-clamp-ac-pf-ppnt | fetch.max.wait.ms = 500 17:04:40 kafka | log.cleaner.io.buffer.load.factor = 0.9 17:04:40 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 17:04:40 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 17:04:40 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-http-ppnt | group.id = 32e809a3-a7c0-4e13-b7a3-aa811059e0bc 17:04:40 policy-clamp-ac-sim-ppnt | fetch.min.bytes = 1 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:39.036+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103019034 17:04:40 policy-clamp-ac-pf-ppnt | fetch.min.bytes = 1 17:04:40 kafka | log.cleaner.io.buffer.size = 524288 17:04:40 policy-apex-pdp | sasl.mechanism = GSSAPI 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | 17:04:40 policy-clamp-ac-http-ppnt | group.instance.id = null 17:04:40 policy-clamp-ac-sim-ppnt | group.id = 6a2107c9-1f65-47c8-af5c-8c5cc7111397 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:39.039+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-1, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Subscribed to topic(s): policy-acruntime-participant 17:04:40 policy-clamp-ac-pf-ppnt | group.id = 97317da4-3ba6-4109-8e73-20dc2312d257 17:04:40 
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 17:04:40 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 17:04:40 policy-pap | [2024-02-16T17:03:23.752+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-clamp-ac-http-ppnt | heartbeat.interval.ms = 3000 17:04:40 policy-clamp-ac-sim-ppnt | group.instance.id = null 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:39.256+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 17:04:40 policy-clamp-ac-pf-ppnt | group.instance.id = null 17:04:40 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 17:04:40 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:23.752+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-clamp-ac-http-ppnt | interceptor.classes = [] 17:04:40 policy-clamp-ac-sim-ppnt | heartbeat.interval.ms = 3000 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:39.602+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2e929182, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5423a17, org.springframework.security.web.context.SecurityContextHolderFilter@7c0de229, org.springframework.security.web.header.HeaderWriterFilter@2c7ad4f3, org.springframework.security.web.authentication.logout.LogoutFilter@4b343b6d, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3a2bb026, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@4756e5cc, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@3dbb3fb7, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@42ff9a77, org.springframework.security.web.access.ExceptionTranslationFilter@36525ab, org.springframework.security.web.access.intercept.AuthorizationFilter@352e5a82] 17:04:40 policy-clamp-ac-pf-ppnt | heartbeat.interval.ms = 3000 17:04:40 kafka | log.cleaner.min.cleanable.ratio = 0.5 17:04:40 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:23.752+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103003750 17:04:40 policy-clamp-ac-http-ppnt | internal.leave.group.on.close = true 17:04:40 policy-clamp-ac-sim-ppnt | interceptor.classes = [] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:39.605+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.acm.runtime.config.MetricsConfiguration 17:04:40 policy-clamp-ac-pf-ppnt | interceptor.classes = [] 17:04:40 kafka | log.cleaner.min.compaction.lag.ms = 0 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:23.755+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-1, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Subscribed to topic(s): policy-pdp-pap 17:04:40 policy-clamp-ac-http-ppnt | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-clamp-ac-sim-ppnt | internal.leave.group.on.close = true 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:40.821+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 17:04:40 policy-clamp-ac-pf-ppnt | internal.leave.group.on.close = true 17:04:40 kafka | log.cleaner.threads = 1 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 17:04:40 policy-pap | [2024-02-16T17:03:23.756+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-clamp-ac-http-ppnt | isolation.level = read_uncommitted 17:04:40 policy-clamp-ac-sim-ppnt | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:40.974+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 17:04:40 policy-clamp-ac-pf-ppnt | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 kafka | log.cleanup.policy = [delete] 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | allow.auto.create.topics = true 17:04:40 policy-clamp-ac-http-ppnt | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-sim-ppnt | isolation.level = read_uncommitted 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:40.999+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/onap/policy/clamp/acm' 17:04:40 policy-clamp-ac-pf-ppnt | isolation.level = read_uncommitted 17:04:40 kafka | log.dir = /tmp/kafka-logs 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 policy-pap | auto.commit.interval.ms = 5000 17:04:40 policy-clamp-ac-http-ppnt | max.partition.fetch.bytes = 1048576 17:04:40 policy-clamp-ac-sim-ppnt | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.023+00:00|INFO|ServiceManager|main] service manager starting 17:04:40 policy-clamp-ac-pf-ppnt | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 kafka | log.dirs = /var/lib/kafka/data 17:04:40 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | auto.include.jmx.reporter = true 17:04:40 policy-clamp-ac-http-ppnt | max.poll.interval.ms = 300000 17:04:40 policy-clamp-ac-sim-ppnt | max.partition.fetch.bytes = 1048576 17:04:40 policy-clamp-ac-pf-ppnt | max.partition.fetch.bytes 
= 1048576 17:04:40 kafka | log.flush.interval.messages = 9223372036854775807 17:04:40 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-db-migrator | 17:04:40 policy-pap | auto.offset.reset = latest 17:04:40 policy-clamp-ac-http-ppnt | max.poll.records = 500 17:04:40 policy-clamp-ac-sim-ppnt | max.poll.interval.ms = 300000 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.023+00:00|INFO|ServiceManager|main] service manager starting Topic endpoint management 17:04:40 policy-clamp-ac-pf-ppnt | max.poll.interval.ms = 300000 17:04:40 kafka | log.flush.interval.ms = null 17:04:40 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-db-migrator | 17:04:40 policy-pap | bootstrap.servers = [kafka:9092] 17:04:40 policy-clamp-ac-http-ppnt | metadata.max.age.ms = 300000 17:04:40 policy-clamp-ac-sim-ppnt | max.poll.records = 500 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.027+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0b0f93e1-9727-45a5-b97d-714a24b64a62, consumerInstance=policy-clamp-runtime-acm, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 17:04:40 policy-clamp-ac-pf-ppnt | max.poll.records = 500 17:04:40 kafka | log.flush.offset.checkpoint.interval.ms = 60000 17:04:40 policy-apex-pdp | security.protocol = PLAINTEXT 17:04:40 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 17:04:40 policy-pap | check.crcs = true 17:04:40 policy-clamp-ac-http-ppnt | metric.reporters = [] 17:04:40 policy-clamp-ac-sim-ppnt | metadata.max.age.ms = 300000 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.040+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-clamp-ac-pf-ppnt | metadata.max.age.ms = 300000 17:04:40 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 17:04:40 policy-apex-pdp | security.providers = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-ac-http-ppnt | metrics.num.samples = 2 17:04:40 policy-clamp-ac-sim-ppnt | metric.reporters = [] 17:04:40 policy-clamp-runtime-acm | allow.auto.create.topics = true 17:04:40 policy-clamp-ac-pf-ppnt | metric.reporters = [] 17:04:40 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 17:04:40 policy-apex-pdp | send.buffer.bytes = 131072 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:40 policy-pap | client.id = consumer-policy-pap-2 17:04:40 policy-clamp-ac-http-ppnt | metrics.recording.level = INFO 17:04:40 policy-clamp-ac-sim-ppnt | metrics.num.samples = 2 17:04:40 policy-clamp-runtime-acm | auto.commit.interval.ms = 5000 17:04:40 policy-clamp-ac-pf-ppnt | metrics.num.samples = 2 17:04:40 kafka | log.index.interval.bytes = 4096 17:04:40 policy-apex-pdp | session.timeout.ms = 45000 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | client.rack = 17:04:40 
policy-clamp-ac-http-ppnt | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-ac-sim-ppnt | metrics.recording.level = INFO 17:04:40 policy-clamp-runtime-acm | auto.include.jmx.reporter = true 17:04:40 policy-clamp-ac-pf-ppnt | metrics.recording.level = INFO 17:04:40 kafka | log.index.size.max.bytes = 10485760 17:04:40 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-db-migrator | 17:04:40 policy-pap | connections.max.idle.ms = 540000 17:04:40 policy-clamp-ac-http-ppnt | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-clamp-ac-sim-ppnt | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-runtime-acm | auto.offset.reset = latest 17:04:40 policy-clamp-ac-pf-ppnt | metrics.sample.window.ms = 30000 17:04:40 kafka | log.local.retention.bytes = -2 17:04:40 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-db-migrator | 17:04:40 policy-pap | default.api.timeout.ms = 60000 17:04:40 policy-clamp-ac-http-ppnt | receive.buffer.bytes = 65536 17:04:40 policy-clamp-ac-sim-ppnt | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-clamp-runtime-acm | bootstrap.servers = [kafka:9092] 17:04:40 policy-clamp-ac-pf-ppnt | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 kafka | log.local.retention.ms = -2 17:04:40 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 17:04:40 policy-pap | enable.auto.commit = true 17:04:40 policy-clamp-ac-http-ppnt | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-ac-sim-ppnt | receive.buffer.bytes = 65536 17:04:40 policy-clamp-runtime-acm | check.crcs = true 17:04:40 policy-clamp-ac-pf-ppnt | receive.buffer.bytes = 65536 17:04:40 policy-apex-pdp | ssl.cipher.suites = null 17:04:40 kafka | log.message.downconversion.enable = true 17:04:40 policy-pap | exclude.internal.topics = true 17:04:40 policy-clamp-ac-http-ppnt | reconnect.backoff.ms = 50 17:04:40 policy-clamp-ac-sim-ppnt | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-runtime-acm | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-ac-pf-ppnt | reconnect.backoff.max.ms = 1000 17:04:40 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | log.message.format.version = 3.0-IV1 17:04:40 policy-pap | fetch.max.bytes = 52428800 17:04:40 policy-clamp-ac-http-ppnt | request.timeout.ms = 30000 17:04:40 policy-clamp-ac-sim-ppnt | reconnect.backoff.ms = 50 17:04:40 policy-clamp-runtime-acm | client.id = consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2 17:04:40 policy-clamp-ac-pf-ppnt | reconnect.backoff.ms = 50 17:04:40 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 17:04:40 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 17:04:40 policy-pap | fetch.max.wait.ms = 500 17:04:40 policy-clamp-ac-http-ppnt | retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | request.timeout.ms = 30000 17:04:40 policy-clamp-runtime-acm | client.rack = 17:04:40 policy-clamp-ac-pf-ppnt | request.timeout.ms = 30000 17:04:40 policy-apex-pdp | ssl.engine.factory.class 
= null 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 17:04:40 policy-pap | fetch.min.bytes = 1 17:04:40 policy-clamp-ac-http-ppnt | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-ac-sim-ppnt | retry.backoff.ms = 100 17:04:40 policy-clamp-runtime-acm | connections.max.idle.ms = 540000 17:04:40 policy-clamp-ac-pf-ppnt | retry.backoff.ms = 100 17:04:40 policy-apex-pdp | ssl.key.password = null 17:04:40 policy-db-migrator | 17:04:40 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 17:04:40 policy-pap | group.id = policy-pap 17:04:40 policy-clamp-ac-http-ppnt | sasl.jaas.config = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-runtime-acm | default.api.timeout.ms = 60000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.client.callback.handler.class = null 17:04:40 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 17:04:40 policy-db-migrator | 17:04:40 kafka | log.message.timestamp.type = CreateTime 17:04:40 policy-pap | group.instance.id = null 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-ac-sim-ppnt | sasl.jaas.config = null 17:04:40 policy-clamp-runtime-acm | enable.auto.commit = true 17:04:40 policy-clamp-ac-pf-ppnt | sasl.jaas.config = null 17:04:40 policy-apex-pdp | ssl.keystore.certificate.chain = null 17:04:40 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 17:04:40 kafka | log.preallocate = false 17:04:40 policy-pap | heartbeat.interval.ms = 3000 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-runtime-acm | exclude.internal.topics = true 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-apex-pdp | ssl.keystore.key = null 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | log.retention.bytes = -1 17:04:40 policy-pap | interceptor.classes = [] 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.service.name = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-runtime-acm | fetch.max.bytes = 52428800 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-apex-pdp | ssl.keystore.location = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 17:04:40 kafka | log.retention.check.interval.ms = 300000 17:04:40 policy-pap | internal.leave.group.on.close = true 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.service.name = null 17:04:40 policy-clamp-runtime-acm | fetch.max.wait.ms = 500 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.service.name = null 17:04:40 policy-apex-pdp | ssl.keystore.password = null 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | log.retention.hours = 168 17:04:40 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-clamp-runtime-acm | fetch.min.bytes = 1 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-apex-pdp | 
ssl.keystore.type = JKS 17:04:40 policy-db-migrator | 17:04:40 kafka | log.retention.minutes = null 17:04:40 policy-pap | isolation.level = read_uncommitted 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-clamp-runtime-acm | group.id = 0b0f93e1-9727-45a5-b97d-714a24b64a62 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-apex-pdp | ssl.protocol = TLSv1.3 17:04:40 policy-db-migrator | 17:04:40 kafka | log.retention.ms = null 17:04:40 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.class = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-runtime-acm | group.instance.id = null 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.callback.handler.class = null 17:04:40 policy-apex-pdp | ssl.provider = null 17:04:40 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 17:04:40 kafka | log.roll.hours = 168 17:04:40 policy-pap | max.partition.fetch.bytes = 1048576 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.class = null 17:04:40 policy-clamp-runtime-acm | heartbeat.interval.ms = 3000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.class = null 17:04:40 policy-apex-pdp | ssl.secure.random.implementation = null 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | log.roll.jitter.hours = 0 17:04:40 policy-pap | max.poll.interval.ms = 300000 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-runtime-acm | interceptor.classes = [] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.connect.timeout.ms = null 17:04:40 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 17:04:40 kafka | log.roll.jitter.ms = null 17:04:40 policy-pap | max.poll.records = 500 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-runtime-acm | internal.leave.group.on.close = true 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.read.timeout.ms = null 17:04:40 policy-apex-pdp | ssl.truststore.certificates = null 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | log.roll.ms = null 17:04:40 policy-pap | metadata.max.age.ms = 300000 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-runtime-acm | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-apex-pdp | ssl.truststore.location = null 17:04:40 policy-db-migrator | 17:04:40 kafka | log.segment.bytes = 1073741824 17:04:40 policy-pap | metric.reporters = [] 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-runtime-acm | isolation.level = 
read_uncommitted 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-apex-pdp | ssl.truststore.password = null 17:04:40 policy-db-migrator | 17:04:40 kafka | log.segment.delete.delay.ms = 60000 17:04:40 policy-pap | metrics.num.samples = 2 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-runtime-acm | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-apex-pdp | ssl.truststore.type = JKS 17:04:40 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 17:04:40 kafka | max.connection.creation.rate = 2147483647 17:04:40 kafka | max.connections = 2147483647 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-runtime-acm | max.partition.fetch.bytes = 1048576 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | metrics.recording.level = INFO 17:04:40 kafka | max.connections.per.ip = 2147483647 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-runtime-acm | max.poll.interval.ms = 300000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-apex-pdp | 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 policy-pap | metrics.sample.window.ms = 30000 17:04:40 kafka | max.connections.per.ip.overrides = 17:04:40 policy-clamp-ac-http-ppnt | sasl.mechanism = GSSAPI 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-runtime-acm | max.poll.records = 500 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.retry.backoff.ms = 100 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.433+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 kafka | max.incremental.fetch.session.cache.slots = 1000 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-ac-sim-ppnt | sasl.mechanism = GSSAPI 17:04:40 policy-clamp-runtime-acm | metadata.max.age.ms = 300000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.mechanism = GSSAPI 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.434+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-db-migrator | 17:04:40 policy-pap | receive.buffer.bytes = 65536 17:04:40 kafka | message.max.bytes = 1048588 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-runtime-acm | metric.reporters = [] 17:04:40 policy-clamp-ac-pf-ppnt | 
sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.434+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103007433 17:04:40 policy-db-migrator | 17:04:40 policy-pap | reconnect.backoff.max.ms = 1000 17:04:40 kafka | metadata.log.dir = null 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-runtime-acm | metrics.num.samples = 2 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.expected.audience = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.434+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Subscribed to topic(s): policy-pdp-pap 17:04:40 policy-pap | reconnect.backoff.ms = 50 17:04:40 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-runtime-acm | metrics.recording.level = INFO 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.435+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=87c9230a-bdff-4a83-91ce-7ad113bd23a0, alive=false, publisher=null]]: starting 17:04:40 policy-pap | request.timeout.ms = 30000 17:04:40 kafka | metadata.log.max.snapshot.interval.ms = 3600000 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-clamp-runtime-acm | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.452+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:04:40 policy-pap | retry.backoff.ms = 100 17:04:40 kafka | metadata.log.segment.bytes = 1073741824 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-runtime-acm | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 policy-apex-pdp | acks = -1 17:04:40 policy-pap | sasl.client.callback.handler.class = null 17:04:40 kafka | metadata.log.segment.min.bytes = 8388608 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-clamp-runtime-acm | receive.buffer.bytes = 65536 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | auto.include.jmx.reporter 
= true 17:04:40 policy-pap | sasl.jaas.config = null 17:04:40 kafka | metadata.log.segment.ms = 604800000 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-clamp-runtime-acm | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | batch.size = 16384 17:04:40 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 kafka | metadata.max.idle.interval.ms = 500 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-clamp-runtime-acm | reconnect.backoff.ms = 50 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | bootstrap.servers = [kafka:9092] 17:04:40 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 kafka | metadata.max.retention.bytes = 104857600 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-clamp-runtime-acm | request.timeout.ms = 30000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 17:04:40 policy-apex-pdp | buffer.memory = 33554432 17:04:40 policy-pap | sasl.kerberos.service.name = null 17:04:40 kafka | metadata.max.retention.ms = 604800000 17:04:40 policy-clamp-ac-http-ppnt | security.protocol = PLAINTEXT 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-clamp-runtime-acm | retry.backoff.ms = 100 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 17:04:40 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 kafka | metric.reporters = [] 17:04:40 policy-clamp-ac-http-ppnt | security.providers = null 17:04:40 policy-clamp-ac-sim-ppnt | security.protocol = PLAINTEXT 17:04:40 policy-clamp-runtime-acm | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-ac-pf-ppnt | security.protocol = PLAINTEXT 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:40 policy-apex-pdp | client.id = producer-1 17:04:40 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 kafka | metrics.num.samples = 2 17:04:40 policy-clamp-ac-http-ppnt | send.buffer.bytes = 131072 17:04:40 policy-clamp-ac-sim-ppnt | security.providers = null 17:04:40 policy-clamp-runtime-acm | sasl.jaas.config = null 17:04:40 policy-clamp-ac-pf-ppnt | security.providers = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | compression.type = none 17:04:40 policy-pap | sasl.login.callback.handler.class = null 17:04:40 kafka | metrics.recording.level = INFO 17:04:40 policy-clamp-ac-http-ppnt | session.timeout.ms = 45000 17:04:40 policy-clamp-ac-sim-ppnt | send.buffer.bytes = 131072 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-ac-pf-ppnt | send.buffer.bytes = 131072 17:04:40 policy-db-migrator | 17:04:40 
policy-db-migrator | 17:04:40 policy-pap | sasl.login.class = null 17:04:40 kafka | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-ac-http-ppnt | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-clamp-ac-sim-ppnt | session.timeout.ms = 45000 17:04:40 policy-clamp-ac-pf-ppnt | session.timeout.ms = 45000 17:04:40 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 17:04:40 policy-apex-pdp | connections.max.idle.ms = 540000 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-pap | sasl.login.connect.timeout.ms = null 17:04:40 kafka | min.insync.replicas = 1 17:04:40 policy-clamp-ac-http-ppnt | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-clamp-ac-pf-ppnt | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | delivery.timeout.ms = 120000 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.service.name = null 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 kafka | node.id = 1 17:04:40 policy-clamp-ac-http-ppnt | ssl.cipher.suites = null 17:04:40 policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-clamp-ac-pf-ppnt | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 policy-apex-pdp | enable.idempotence = true 17:04:40 policy-pap | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 kafka | num.io.threads = 8 17:04:40 policy-clamp-ac-http-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-ac-sim-ppnt | ssl.cipher.suites = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.cipher.suites = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | interceptor.classes = [] 17:04:40 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-runtime-acm | sasl.login.callback.handler.class = null 17:04:40 kafka | num.network.threads = 3 17:04:40 policy-clamp-ac-http-ppnt | ssl.endpoint.identification.algorithm = https 17:04:40 policy-clamp-ac-sim-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-ac-pf-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-runtime-acm | sasl.login.class = null 17:04:40 policy-clamp-ac-http-ppnt | ssl.engine.factory.class = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.endpoint.identification.algorithm = https 17:04:40 kafka | num.partitions = 1 17:04:40 policy-clamp-ac-pf-ppnt | ssl.endpoint.identification.algorithm = https 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | linger.ms = 0 17:04:40 policy-pap | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-runtime-acm | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-ac-http-ppnt | ssl.key.password = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.engine.factory.class = null 17:04:40 kafka | num.recovery.threads.per.data.dir = 1 17:04:40 policy-clamp-ac-pf-ppnt | ssl.engine.factory.class = null 17:04:40 policy-db-migrator | > upgrade 
0400-jpatoscarequirement_occurrences.sql 17:04:40 policy-apex-pdp | max.block.ms = 60000 17:04:40 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-runtime-acm | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-ac-http-ppnt | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-ac-sim-ppnt | ssl.key.password = null 17:04:40 kafka | num.replica.alter.log.dirs.threads = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.key.password = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | max.in.flight.requests.per.connection = 5 17:04:40 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.certificate.chain = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keymanager.algorithm = SunX509 17:04:40 kafka | num.replica.fetchers = 1 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keymanager.algorithm = SunX509 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 17:04:40 policy-apex-pdp | max.request.size = 1048576 17:04:40 policy-pap | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.key = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.certificate.chain = null 17:04:40 kafka | offset.metadata.max.bytes = 4096 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.certificate.chain = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | metadata.max.age.ms = 300000 17:04:40 policy-pap | sasl.mechanism = GSSAPI 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.location = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.key = null 17:04:40 kafka | offsets.commit.required.acks = -1 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.key = null 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | metadata.max.idle.ms = 300000 17:04:40 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.password = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.location = null 17:04:40 kafka | offsets.commit.timeout.ms = 5000 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.location = null 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | metric.reporters = [] 17:04:40 policy-pap | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-runtime-acm | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.type = JKS 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.password = null 17:04:40 kafka | offsets.load.buffer.size = 5242880 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.password = null 17:04:40 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 17:04:40 policy-apex-pdp | metrics.num.samples = 2 17:04:40 policy-pap | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-runtime-acm | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-http-ppnt | ssl.protocol = TLSv1.3 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.type = JKS 17:04:40 kafka | offsets.retention.check.interval.ms = 600000 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.type = JKS 17:04:40 policy-db-migrator | -------------- 17:04:40 
policy-apex-pdp | metrics.recording.level = INFO 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-clamp-runtime-acm | sasl.mechanism = GSSAPI 17:04:40 policy-clamp-ac-http-ppnt | ssl.provider = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.protocol = TLSv1.3 17:04:40 kafka | offsets.retention.minutes = 10080 17:04:40 policy-clamp-ac-pf-ppnt | ssl.protocol = TLSv1.3 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:40 policy-apex-pdp | metrics.sample.window.ms = 30000 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-ac-http-ppnt | ssl.secure.random.implementation = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.provider = null 17:04:40 kafka | offsets.topic.compression.codec = 0 17:04:40 policy-clamp-ac-pf-ppnt | ssl.provider = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-ac-http-ppnt | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-clamp-ac-sim-ppnt | ssl.secure.random.implementation = null 17:04:40 kafka | offsets.topic.num.partitions = 50 17:04:40 policy-clamp-ac-pf-ppnt | ssl.secure.random.implementation = null 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | partitioner.availability.timeout.ms = 0 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.certificates = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.certificates = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | partitioner.class = null 17:04:40 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.location = null 17:04:40 kafka | offsets.topic.replication.factor = 1 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.location = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.certificates = null 17:04:40 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 17:04:40 policy-apex-pdp | partitioner.ignore.keys = false 17:04:40 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-pap | security.protocol = PLAINTEXT 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.password = null 17:04:40 kafka | offsets.topic.segment.bytes = 104857600 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.password = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.location = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | receive.buffer.bytes = 32768 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-pap | security.providers = null 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.type = JKS 17:04:40 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.type = JKS 
17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.password = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:40 policy-apex-pdp | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-pap | send.buffer.bytes = 131072 17:04:40 policy-clamp-ac-http-ppnt | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 kafka | password.encoder.iterations = 4096 17:04:40 policy-clamp-ac-sim-ppnt | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.type = JKS 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | reconnect.backoff.ms = 50 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-pap | session.timeout.ms = 45000 17:04:40 policy-clamp-ac-http-ppnt | 17:04:40 kafka | password.encoder.key.length = 128 17:04:40 policy-clamp-ac-sim-ppnt | 17:04:40 policy-clamp-ac-pf-ppnt | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | request.timeout.ms = 30000 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.250+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 kafka | password.encoder.keyfactory.algorithm = null 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.306+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-clamp-ac-pf-ppnt | 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | retries = 2147483647 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-pap | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.250+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 kafka | password.encoder.old.secret = null 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.306+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.480+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 17:04:40 policy-apex-pdp | retry.backoff.ms = 100 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-pap | ssl.cipher.suites = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.250+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708102973250 17:04:40 kafka | password.encoder.secret = null 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.306+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708102969306 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.480+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.252+00:00|INFO|KafkaConsumer|main] [Consumer 
clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Subscribed to topic(s): policy-acruntime-participant 17:04:40 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.306+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Subscribed to topic(s): policy-acruntime-participant 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.481+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708102998480 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 17:04:40 policy-apex-pdp | sasl.jaas.config = null 17:04:40 policy-clamp-runtime-acm | security.protocol = PLAINTEXT 17:04:40 policy-pap | ssl.endpoint.identification.algorithm = https 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.262+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a2a67dd7-036e-47bf-8bb4-b8ac84a561a1, alive=false, publisher=null]]: starting 17:04:40 kafka | process.roles = [] 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.307+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9f378aa3-8f61-4875-a93d-dde5000eb5f3, alive=false, publisher=null]]: starting 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.482+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Subscribed to topic(s): policy-acruntime-participant 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-runtime-acm | security.providers = null 17:04:40 policy-pap | ssl.engine.factory.class = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.320+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:04:40 kafka | producer.id.expiration.check.interval.ms = 600000 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.419+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.483+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=88c2f06c-c9a4-45b3-918b-942592d06e7b, alive=false, publisher=null]]: starting 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-runtime-acm | send.buffer.bytes = 131072 17:04:40 policy-pap | ssl.key.password = null 17:04:40 policy-clamp-ac-http-ppnt | acks = -1 17:04:40 kafka | producer.id.expiration.ms = 86400000 17:04:40 policy-clamp-ac-sim-ppnt | acks = -1 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.520+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | sasl.kerberos.service.name = null 17:04:40 policy-clamp-runtime-acm | session.timeout.ms = 45000 17:04:40 policy-pap | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-ac-http-ppnt | auto.include.jmx.reporter = true 17:04:40 kafka | 
producer.purgatory.purge.interval.requests = 1000 17:04:40 policy-clamp-ac-sim-ppnt | auto.include.jmx.reporter = true 17:04:40 policy-clamp-ac-pf-ppnt | acks = -1 17:04:40 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 17:04:40 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-clamp-runtime-acm | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-pap | ssl.keystore.certificate.chain = null 17:04:40 policy-clamp-ac-http-ppnt | batch.size = 16384 17:04:40 kafka | queued.max.request.bytes = -1 17:04:40 policy-clamp-ac-sim-ppnt | batch.size = 16384 17:04:40 policy-clamp-ac-pf-ppnt | auto.include.jmx.reporter = true 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-clamp-runtime-acm | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-pap | ssl.keystore.key = null 17:04:40 policy-clamp-ac-http-ppnt | bootstrap.servers = [kafka:9092] 17:04:40 kafka | queued.max.requests = 500 17:04:40 policy-clamp-ac-sim-ppnt | bootstrap.servers = [kafka:9092] 17:04:40 policy-clamp-ac-pf-ppnt | batch.size = 16384 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:40 policy-apex-pdp | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-runtime-acm | ssl.cipher.suites = null 17:04:40 policy-pap | ssl.keystore.location = null 17:04:40 policy-clamp-ac-http-ppnt | buffer.memory = 33554432 17:04:40 kafka | quota.window.num = 11 17:04:40 policy-clamp-ac-sim-ppnt | buffer.memory = 33554432 17:04:40 policy-clamp-ac-pf-ppnt | bootstrap.servers = [kafka:9092] 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | sasl.login.class = null 17:04:40 policy-clamp-runtime-acm | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-pap | ssl.keystore.password = null 17:04:40 policy-clamp-ac-http-ppnt | client.dns.lookup = use_all_dns_ips 17:04:40 kafka | quota.window.size.seconds = 1 17:04:40 policy-clamp-ac-sim-ppnt | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-ac-pf-ppnt | buffer.memory = 33554432 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-runtime-acm | ssl.endpoint.identification.algorithm = https 17:04:40 policy-pap | ssl.keystore.type = JKS 17:04:40 policy-clamp-ac-http-ppnt | client.id = producer-1 17:04:40 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 17:04:40 policy-clamp-ac-sim-ppnt | client.id = producer-1 17:04:40 policy-clamp-ac-pf-ppnt | client.dns.lookup = use_all_dns_ips 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-runtime-acm | ssl.engine.factory.class = null 17:04:40 policy-pap | ssl.protocol = TLSv1.3 17:04:40 policy-clamp-ac-http-ppnt | compression.type = none 17:04:40 kafka | remote.log.manager.task.interval.ms = 30000 17:04:40 policy-clamp-ac-sim-ppnt | compression.type = none 17:04:40 policy-clamp-ac-pf-ppnt | client.id = producer-1 17:04:40 policy-db-migrator | > upgrade 0450-pdpgroup.sql 17:04:40 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-runtime-acm | ssl.key.password = null 
17:04:40 policy-pap | ssl.provider = null 17:04:40 policy-clamp-ac-http-ppnt | connections.max.idle.ms = 540000 17:04:40 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 17:04:40 policy-clamp-ac-sim-ppnt | connections.max.idle.ms = 540000 17:04:40 policy-clamp-ac-pf-ppnt | compression.type = none 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-runtime-acm | ssl.keymanager.algorithm = SunX509 17:04:40 policy-pap | ssl.secure.random.implementation = null 17:04:40 policy-clamp-ac-http-ppnt | delivery.timeout.ms = 120000 17:04:40 kafka | remote.log.manager.task.retry.backoff.ms = 500 17:04:40 policy-clamp-ac-sim-ppnt | delivery.timeout.ms = 120000 17:04:40 policy-clamp-ac-sim-ppnt | enable.idempotence = true 17:04:40 policy-clamp-ac-pf-ppnt | connections.max.idle.ms = 540000 17:04:40 policy-clamp-ac-pf-ppnt | delivery.timeout.ms = 120000 17:04:40 policy-clamp-runtime-acm | ssl.keystore.certificate.chain = null 17:04:40 policy-pap | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-clamp-ac-http-ppnt | enable.idempotence = true 17:04:40 kafka | remote.log.manager.task.retry.jitter = 0.2 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 17:04:40 policy-clamp-ac-sim-ppnt | interceptor.classes = [] 17:04:40 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-ac-pf-ppnt | enable.idempotence = true 17:04:40 policy-clamp-runtime-acm | ssl.keystore.key = null 17:04:40 policy-pap | ssl.truststore.certificates = null 17:04:40 policy-clamp-ac-http-ppnt | interceptor.classes = [] 17:04:40 kafka | remote.log.manager.thread.pool.size = 10 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-ac-pf-ppnt | interceptor.classes = [] 17:04:40 policy-clamp-runtime-acm | ssl.keystore.location = null 17:04:40 policy-pap | ssl.truststore.location = null 17:04:40 policy-clamp-ac-http-ppnt | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | linger.ms = 0 17:04:40 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-pf-ppnt | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 policy-clamp-runtime-acm | ssl.keystore.password = null 17:04:40 policy-pap | ssl.truststore.password = null 17:04:40 policy-clamp-ac-http-ppnt | linger.ms = 0 17:04:40 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-pf-ppnt | linger.ms = 0 17:04:40 policy-clamp-ac-sim-ppnt | max.block.ms = 60000 17:04:40 policy-clamp-runtime-acm | ssl.keystore.type = JKS 17:04:40 policy-pap | ssl.truststore.type = JKS 17:04:40 kafka | remote.log.metadata.manager.class.path = null 17:04:40 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 17:04:40 policy-clamp-ac-http-ppnt | max.block.ms = 60000 17:04:40 policy-apex-pdp | sasl.mechanism = 
GSSAPI 17:04:40 policy-clamp-ac-pf-ppnt | max.block.ms = 60000 17:04:40 policy-clamp-ac-sim-ppnt | max.in.flight.requests.per.connection = 5 17:04:40 policy-clamp-runtime-acm | ssl.protocol = TLSv1.3 17:04:40 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | max.in.flight.requests.per.connection = 5 17:04:40 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-ac-sim-ppnt | max.request.size = 1048576 17:04:40 policy-clamp-runtime-acm | ssl.provider = null 17:04:40 policy-clamp-ac-pf-ppnt | max.in.flight.requests.per.connection = 5 17:04:40 policy-pap | 17:04:40 kafka | remote.log.metadata.manager.listener.name = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:40 policy-clamp-ac-http-ppnt | max.request.size = 1048576 17:04:40 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-ac-sim-ppnt | metadata.max.age.ms = 300000 17:04:40 policy-clamp-runtime-acm | ssl.secure.random.implementation = null 17:04:40 policy-clamp-ac-pf-ppnt | max.request.size = 1048576 17:04:40 policy-pap | [2024-02-16T17:03:23.762+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 kafka | remote.log.reader.max.pending.tasks = 100 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | metadata.max.age.ms = 300000 17:04:40 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-ac-sim-ppnt | metadata.max.idle.ms = 300000 17:04:40 policy-clamp-runtime-acm | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-clamp-ac-pf-ppnt | metadata.max.age.ms = 300000 17:04:40 policy-pap | [2024-02-16T17:03:23.762+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 kafka | remote.log.reader.threads = 10 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | metadata.max.idle.ms = 300000 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-clamp-ac-sim-ppnt | metric.reporters = [] 17:04:40 policy-clamp-runtime-acm | ssl.truststore.certificates = null 17:04:40 policy-clamp-ac-pf-ppnt | metadata.max.idle.ms = 300000 17:04:40 policy-pap | [2024-02-16T17:03:23.762+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103003762 17:04:40 kafka | remote.log.storage.manager.class.name = null 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | metric.reporters = [] 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | metrics.num.samples = 2 17:04:40 policy-clamp-runtime-acm | ssl.truststore.location = null 17:04:40 policy-clamp-ac-pf-ppnt | metric.reporters = [] 17:04:40 policy-pap | [2024-02-16T17:03:23.762+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 17:04:40 kafka | remote.log.storage.manager.class.path = null 17:04:40 
policy-db-migrator | > upgrade 0470-pdp.sql 17:04:40 policy-clamp-ac-http-ppnt | metrics.num.samples = 2 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | metrics.recording.level = INFO 17:04:40 policy-clamp-runtime-acm | ssl.truststore.password = null 17:04:40 policy-clamp-ac-pf-ppnt | metrics.num.samples = 2 17:04:40 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 17:04:40 policy-pap | [2024-02-16T17:03:24.215+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null), PdpSubGroup(pdpType=drools, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Drools 1.0.0, onap.policies.native.drools.Controller 1.0.0, onap.policies.native.drools.Artifact 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null), PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | metrics.recording.level = INFO 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-clamp-ac-sim-ppnt | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-runtime-acm | ssl.truststore.type = JKS 17:04:40 policy-clamp-ac-pf-ppnt | metrics.recording.level = INFO 17:04:40 kafka | remote.log.storage.system.enable = false 17:04:40 policy-pap | [2024-02-16T17:03:24.360+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:40 policy-clamp-ac-http-ppnt | metrics.sample.window.ms = 30000 17:04:40 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-clamp-ac-sim-ppnt | partitioner.adaptive.partitioning.enable = true 17:04:40 policy-clamp-runtime-acm | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-pf-ppnt | metrics.sample.window.ms = 30000 17:04:40 kafka | replica.fetch.backoff.ms = 1000 17:04:40 policy-pap | [2024-02-16T17:03:24.617+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@1cdad619, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@319058ce, org.springframework.security.web.context.SecurityContextHolderFilter@1fa796a4, org.springframework.security.web.header.HeaderWriterFilter@3879feec, org.springframework.security.web.authentication.logout.LogoutFilter@259c6ab8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@13018f00, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@8dcacf1, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@73c09a98, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3909308c, org.springframework.security.web.access.ExceptionTranslationFilter@280c3dc0, org.springframework.security.web.access.intercept.AuthorizationFilter@44a9971f] 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | partitioner.adaptive.partitioning.enable = true 17:04:40 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-clamp-ac-sim-ppnt | partitioner.availability.timeout.ms = 0 17:04:40 policy-clamp-runtime-acm | 17:04:40 policy-clamp-ac-pf-ppnt | partitioner.adaptive.partitioning.enable = true 17:04:40 kafka | replica.fetch.max.bytes = 1048576 17:04:40 policy-pap | [2024-02-16T17:03:25.540+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | partitioner.availability.timeout.ms = 0 17:04:40 policy-clamp-ac-sim-ppnt | partitioner.class = null 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.047+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-clamp-ac-pf-ppnt | partitioner.availability.timeout.ms = 0 17:04:40 kafka | replica.fetch.min.bytes = 1 17:04:40 policy-pap | [2024-02-16T17:03:25.676+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | partitioner.class = null 17:04:40 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-clamp-ac-sim-ppnt | partitioner.ignore.keys = false 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.047+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-clamp-ac-pf-ppnt | partitioner.class = null 17:04:40 kafka | replica.fetch.response.max.bytes = 10485760 17:04:40 policy-pap | 
[2024-02-16T17:03:25.704+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 17:04:40 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 17:04:40 policy-clamp-ac-http-ppnt | partitioner.ignore.keys = false 17:04:40 policy-apex-pdp | security.protocol = PLAINTEXT 17:04:40 policy-apex-pdp | security.providers = null 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.047+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103021047 17:04:40 policy-clamp-ac-pf-ppnt | partitioner.ignore.keys = false 17:04:40 kafka | replica.fetch.wait.max.ms = 500 17:04:40 policy-pap | [2024-02-16T17:03:25.724+00:00|INFO|ServiceManager|main] Policy PAP starting 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | receive.buffer.bytes = 32768 17:04:40 policy-clamp-ac-sim-ppnt | receive.buffer.bytes = 32768 17:04:40 policy-apex-pdp | send.buffer.bytes = 131072 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.047+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Subscribed to topic(s): policy-acruntime-participant 17:04:40 policy-clamp-ac-pf-ppnt | receive.buffer.bytes = 32768 17:04:40 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 17:04:40 policy-pap | [2024-02-16T17:03:25.724+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 17:04:40 policy-clamp-ac-http-ppnt | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-ac-sim-ppnt | reconnect.backoff.max.ms = 1000 17:04:40 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.048+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a1463b30-8108-460a-8cc2-b705ea225556, alive=false, publisher=null]]: starting 17:04:40 policy-clamp-ac-pf-ppnt | reconnect.backoff.max.ms = 1000 17:04:40 kafka | replica.lag.time.max.ms = 30000 17:04:40 policy-pap | [2024-02-16T17:03:25.726+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | reconnect.backoff.ms = 50 17:04:40 policy-clamp-ac-sim-ppnt | reconnect.backoff.ms = 50 17:04:40 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.069+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:04:40 policy-clamp-ac-pf-ppnt | reconnect.backoff.ms = 50 17:04:40 kafka | replica.selector.class = null 17:04:40 policy-pap | [2024-02-16T17:03:25.726+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | request.timeout.ms = 30000 17:04:40 policy-clamp-ac-sim-ppnt | request.timeout.ms = 30000 17:04:40 policy-apex-pdp | ssl.cipher.suites = null 17:04:40 policy-clamp-runtime-acm | acks = -1 
17:04:40 policy-clamp-ac-pf-ppnt | request.timeout.ms = 30000 17:04:40 kafka | replica.socket.receive.buffer.bytes = 65536 17:04:40 policy-pap | [2024-02-16T17:03:25.726+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | retries = 2147483647 17:04:40 policy-clamp-ac-sim-ppnt | retries = 2147483647 17:04:40 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-runtime-acm | auto.include.jmx.reporter = true 17:04:40 policy-clamp-ac-pf-ppnt | retries = 2147483647 17:04:40 kafka | replica.socket.timeout.ms = 30000 17:04:40 policy-pap | [2024-02-16T17:03:25.727+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 17:04:40 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 17:04:40 policy-clamp-ac-http-ppnt | retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | retry.backoff.ms = 100 17:04:40 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 17:04:40 policy-clamp-runtime-acm | batch.size = 16384 17:04:40 policy-clamp-ac-pf-ppnt | retry.backoff.ms = 100 17:04:40 kafka | replication.quota.window.num = 11 17:04:40 policy-pap | [2024-02-16T17:03:25.727+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.client.callback.handler.class = null 17:04:40 policy-apex-pdp | ssl.engine.factory.class = null 17:04:40 policy-clamp-runtime-acm | bootstrap.servers = [kafka:9092] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.client.callback.handler.class = null 17:04:40 kafka | replication.quota.window.size.seconds = 1 17:04:40 policy-pap | [2024-02-16T17:03:25.731+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=084a2e58-01c1-4612-9881-9e51d9ffa3ed, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1f235a0a 17:04:40 policy-clamp-ac-http-ppnt | sasl.jaas.config = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.jaas.config = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:40 policy-apex-pdp | ssl.key.password = null 17:04:40 policy-clamp-runtime-acm | buffer.memory = 33554432 17:04:40 policy-clamp-ac-pf-ppnt | sasl.jaas.config = null 17:04:40 kafka | request.timeout.ms = 30000 17:04:40 policy-pap | [2024-02-16T17:03:25.743+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource 
[getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=084a2e58-01c1-4612-9881-9e51d9ffa3ed, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-runtime-acm | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 kafka | reserved.broker.max.id = 1000 17:04:40 policy-pap | [2024-02-16T17:03:25.744+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | ssl.keystore.certificate.chain = null 17:04:40 policy-clamp-runtime-acm | client.id = producer-1 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 kafka | sasl.client.callback.handler.class = null 17:04:40 policy-pap | allow.auto.create.topics = true 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.service.name = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.service.name = null 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | ssl.keystore.key = null 17:04:40 policy-clamp-runtime-acm | compression.type = none 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.service.name = null 17:04:40 kafka | sasl.enabled.mechanisms = [GSSAPI] 17:04:40 policy-pap | auto.commit.interval.ms = 5000 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 17:04:40 policy-apex-pdp | ssl.keystore.location = null 17:04:40 policy-clamp-runtime-acm | connections.max.idle.ms = 540000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 kafka | sasl.jaas.config = null 17:04:40 policy-pap | auto.include.jmx.reporter = true 17:04:40 policy-clamp-ac-http-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | ssl.keystore.password = null 17:04:40 policy-clamp-runtime-acm | delivery.timeout.ms = 120000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-pap | auto.offset.reset = latest 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.callback.handler.class = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, 
parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:40 policy-apex-pdp | ssl.keystore.type = JKS 17:04:40 policy-clamp-runtime-acm | enable.idempotence = true 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.callback.handler.class = null 17:04:40 kafka | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-pap | bootstrap.servers = [kafka:9092] 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.class = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.class = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | ssl.protocol = TLSv1.3 17:04:40 policy-clamp-runtime-acm | interceptor.classes = [] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.class = null 17:04:40 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 17:04:40 policy-pap | check.crcs = true 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.connect.timeout.ms = null 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | ssl.provider = null 17:04:40 policy-clamp-runtime-acm | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.connect.timeout.ms = null 17:04:40 kafka | sasl.kerberos.service.name = null 17:04:40 policy-pap | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.read.timeout.ms = null 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | ssl.secure.random.implementation = null 17:04:40 policy-clamp-runtime-acm | linger.ms = 0 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.read.timeout.ms = null 17:04:40 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-pap | client.id = consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 17:04:40 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-clamp-runtime-acm | max.block.ms = 60000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.buffer.seconds = 300 17:04:40 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-pap | client.rack = 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | ssl.truststore.certificates = null 17:04:40 policy-clamp-runtime-acm | max.in.flight.requests.per.connection = 5 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.min.period.seconds = 60 17:04:40 kafka | sasl.login.callback.handler.class = null 17:04:40 policy-pap | connections.max.idle.ms = 540000 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 17:04:40 policy-apex-pdp | ssl.truststore.location = null 17:04:40 policy-clamp-runtime-acm | max.request.size = 1048576 17:04:40 policy-clamp-ac-pf-ppnt | 
sasl.login.refresh.window.factor = 0.8 17:04:40 kafka | sasl.login.class = null 17:04:40 policy-pap | default.api.timeout.ms = 60000 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | ssl.truststore.password = null 17:04:40 policy-clamp-runtime-acm | metadata.max.age.ms = 300000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.refresh.window.jitter = 0.05 17:04:40 kafka | sasl.login.connect.timeout.ms = null 17:04:40 policy-pap | enable.auto.commit = true 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | ssl.truststore.type = JKS 17:04:40 policy-clamp-runtime-acm | metadata.max.idle.ms = 300000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.retry.backoff.max.ms = 10000 17:04:40 kafka | sasl.login.read.timeout.ms = null 17:04:40 policy-pap | exclude.internal.topics = true 17:04:40 policy-clamp-ac-http-ppnt | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | transaction.timeout.ms = 60000 17:04:40 policy-clamp-runtime-acm | metric.reporters = [] 17:04:40 policy-clamp-ac-pf-ppnt | sasl.login.retry.backoff.ms = 100 17:04:40 kafka | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-pap | fetch.max.bytes = 52428800 17:04:40 policy-clamp-ac-http-ppnt | sasl.mechanism = GSSAPI 17:04:40 policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.ms = 100 17:04:40 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 17:04:40 policy-apex-pdp | transactional.id = null 17:04:40 policy-clamp-runtime-acm | metrics.num.samples = 2 17:04:40 policy-clamp-ac-pf-ppnt | sasl.mechanism = GSSAPI 17:04:40 kafka | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-pap | fetch.max.wait.ms = 500 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-ac-sim-ppnt | sasl.mechanism = GSSAPI 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 policy-clamp-runtime-acm | metrics.recording.level = INFO 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 kafka | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-pap | fetch.min.bytes = 1 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 17:04:40 policy-apex-pdp | 17:04:40 policy-clamp-runtime-acm | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.expected.audience = null 17:04:40 kafka | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-pap | group.id = 084a2e58-01c1-4612-9881-9e51d9ffa3ed 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.audience = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.465+00:00|INFO|KafkaProducer|main] [Producer 
clientId=producer-1] Instantiated an idempotent producer. 17:04:40 policy-clamp-runtime-acm | partitioner.adaptive.partitioning.enable = true 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.expected.issuer = null 17:04:40 kafka | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-pap | group.instance.id = null 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.486+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-clamp-runtime-acm | partitioner.availability.timeout.ms = 0 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 kafka | sasl.login.retry.backoff.ms = 100 17:04:40 policy-pap | heartbeat.interval.ms = 3000 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.487+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-clamp-runtime-acm | partitioner.class = null 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 kafka | sasl.mechanism.controller.protocol = GSSAPI 17:04:40 policy-pap | interceptor.classes = [] 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.487+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103007486 17:04:40 policy-clamp-runtime-acm | partitioner.ignore.keys = false 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 17:04:40 policy-pap | internal.leave.group.on.close = true 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.487+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=87c9230a-bdff-4a83-91ce-7ad113bd23a0, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:04:40 policy-clamp-runtime-acm | receive.buffer.bytes = 32768 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 kafka | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, 
concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.488+00:00|INFO|ServiceManager|main] service manager starting set alive 17:04:40 policy-clamp-runtime-acm | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.scope.claim.name = scope 17:04:40 kafka | sasl.oauthbearer.expected.audience = null 17:04:40 policy-pap | isolation.level = read_uncommitted 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.488+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 17:04:40 policy-clamp-runtime-acm | reconnect.backoff.ms = 50 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.sub.claim.name = sub 17:04:40 kafka | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-ac-http-ppnt | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.491+00:00|INFO|ServiceManager|main] service manager starting topic sinks 17:04:40 policy-clamp-runtime-acm | request.timeout.ms = 30000 17:04:40 policy-clamp-ac-pf-ppnt | sasl.oauthbearer.token.endpoint.url = null 17:04:40 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-pap | max.partition.fetch.bytes = 1048576 17:04:40 policy-clamp-ac-http-ppnt | security.protocol = PLAINTEXT 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-runtime-acm | retries = 2147483647 17:04:40 policy-clamp-ac-pf-ppnt | security.protocol = PLAINTEXT 17:04:40 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-pap | max.poll.interval.ms = 300000 17:04:40 policy-clamp-ac-http-ppnt | security.providers = null 17:04:40 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.491+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 17:04:40 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 17:04:40 policy-clamp-runtime-acm | retry.backoff.ms = 100 17:04:40 policy-clamp-ac-pf-ppnt | security.providers = null 17:04:40 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-pap | max.poll.records = 500 17:04:40 policy-clamp-ac-http-ppnt | send.buffer.bytes = 131072 17:04:40 policy-clamp-ac-sim-ppnt | security.protocol = PLAINTEXT 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.493+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-runtime-acm | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-ac-pf-ppnt | send.buffer.bytes = 131072 17:04:40 kafka | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-pap | metadata.max.age.ms = 300000 17:04:40 policy-clamp-ac-http-ppnt | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-clamp-ac-sim-ppnt | security.providers = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.493+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` 
VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 17:04:40 policy-clamp-runtime-acm | sasl.jaas.config = null 17:04:40 policy-clamp-ac-pf-ppnt | socket.connection.setup.timeout.max.ms = 30000 17:04:40 kafka | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-pap | metric.reporters = [] 17:04:40 policy-clamp-ac-http-ppnt | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | send.buffer.bytes = 131072 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.493+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-ac-pf-ppnt | socket.connection.setup.timeout.ms = 10000 17:04:40 kafka | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-pap | metrics.num.samples = 2 17:04:40 policy-clamp-ac-http-ppnt | ssl.cipher.suites = null 17:04:40 policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.493+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=02b3ddfc-6c0d-4750-8519-6e56d3cb3479, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-ac-pf-ppnt | ssl.cipher.suites = null 17:04:40 kafka | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-pap | metrics.recording.level = INFO 17:04:40 policy-clamp-ac-http-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.494+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=02b3ddfc-6c0d-4750-8519-6e56d3cb3479, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.service.name = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 kafka | sasl.server.callback.handler.class = null 17:04:40 policy-pap | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-ac-http-ppnt | ssl.endpoint.identification.algorithm = https 17:04:40 
policy-clamp-ac-sim-ppnt | ssl.cipher.suites = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.494+00:00|INFO|ServiceManager|main] service manager starting Create REST server 17:04:40 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-clamp-ac-pf-ppnt | ssl.endpoint.identification.algorithm = https 17:04:40 kafka | sasl.server.max.receive.size = 524288 17:04:40 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-clamp-ac-http-ppnt | ssl.engine.factory.class = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.507+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-clamp-ac-pf-ppnt | ssl.engine.factory.class = null 17:04:40 kafka | security.inter.broker.protocol = PLAINTEXT 17:04:40 policy-pap | receive.buffer.bytes = 65536 17:04:40 policy-clamp-ac-http-ppnt | ssl.key.password = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.endpoint.identification.algorithm = https 17:04:40 policy-apex-pdp | [] 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 17:04:40 policy-clamp-runtime-acm | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.key.password = null 17:04:40 kafka | security.providers = null 17:04:40 policy-pap | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-ac-http-ppnt | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-ac-sim-ppnt | ssl.engine.factory.class = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.509+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-runtime-acm | sasl.login.class = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keymanager.algorithm = SunX509 17:04:40 kafka | server.max.startup.time.ms = 9223372036854775807 17:04:40 policy-pap | reconnect.backoff.ms = 50 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.certificate.chain = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.key.password = null 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0cac32e0-c036-4141-842e-4613c65fc21c","timestampMs":1708103007494,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup"} 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-runtime-acm | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.certificate.chain = null 17:04:40 kafka | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-pap | request.timeout.ms = 30000 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.key = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keymanager.algorithm = SunX509 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.703+00:00|INFO|ServiceManager|main] service manager starting Rest Server 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-runtime-acm | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.key = null 17:04:40 kafka | socket.connection.setup.timeout.ms = 10000 
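The [OUT|KAFKA|policy-pdp-pap] entry above shows apex-pdp publishing its PDP_STATUS heartbeat as a JSON string on the policy-pdp-pap topic, using the StringSerializer configured in its ProducerConfig dump earlier in the log. The sketch below reproduces that message shape with a plain Kafka producer; the requestId and name values are illustrative placeholders, not taken from this run, and this is a minimal stand-in for the framework's own publisher.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpStatusPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker shown in the log
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // Shape copied from the PDP_STATUS heartbeat visible in the log; ids here are placeholders.
            String heartbeat = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                    + "\"description\":\"Pdp Heartbeat\",\"messageName\":\"PDP_STATUS\","
                    + "\"requestId\":\"00000000-0000-0000-0000-000000000000\","
                    + "\"timestampMs\":" + System.currentTimeMillis() + ","
                    + "\"name\":\"apex-demo\",\"pdpGroup\":\"defaultGroup\"}";

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", heartbeat));
                producer.flush();
            }
        }
    }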
17:04:40 policy-pap | retry.backoff.ms = 100 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.location = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.certificate.chain = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.703+00:00|INFO|ServiceManager|main] service manager starting 17:04:40 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.location = null 17:04:40 kafka | socket.listen.backlog.size = 50 17:04:40 policy-pap | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.password = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.key = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.703+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.password = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.keystore.type = JKS 17:04:40 policy-pap | sasl.jaas.config = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.location = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.703+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | socket.receive.buffer.bytes = 102400 17:04:40 policy-clamp-ac-http-ppnt | ssl.keystore.type = JKS 17:04:40 policy-clamp-ac-pf-ppnt | ssl.protocol = TLSv1.3 17:04:40 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.password = null 17:04:40 policy-apex-pdp | 
[2024-02-16T17:03:27.726+00:00|INFO|ServiceManager|main] service manager started 17:04:40 policy-clamp-runtime-acm | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-db-migrator | 17:04:40 kafka | socket.request.max.bytes = 104857600 17:04:40 policy-clamp-ac-http-ppnt | ssl.protocol = TLSv1.3 17:04:40 policy-clamp-ac-pf-ppnt | ssl.provider = null 17:04:40 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-ac-sim-ppnt | ssl.keystore.type = JKS 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.726+00:00|INFO|ServiceManager|main] service manager started 17:04:40 policy-clamp-runtime-acm | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | > upgrade 0570-toscadatatype.sql 17:04:40 policy-clamp-ac-http-ppnt | ssl.provider = null 17:04:40 policy-clamp-ac-pf-ppnt | ssl.secure.random.implementation = null 17:04:40 policy-pap | sasl.kerberos.service.name = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.protocol = TLSv1.3 17:04:40 policy-clamp-runtime-acm | sasl.login.retry.backoff.ms = 100 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.726+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | ssl.secure.random.implementation = null 17:04:40 kafka | socket.send.buffer.bytes = 102400 17:04:40 policy-clamp-ac-pf-ppnt | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-clamp-ac-sim-ppnt | ssl.provider = null 17:04:40 policy-clamp-runtime-acm | sasl.mechanism = GSSAPI 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.727+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
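Around these entries the db-migrator is stepping through its numbered upgrade scripts (0480-pdpstatistics.sql, 0550-toscacapabilitytypes.sql, 0570-toscadatatype.sql, ...) and applying CREATE TABLE IF NOT EXISTS statements against the policy database. The sketch below replays one of the DDL statements visible earlier in the log over plain JDBC; the MariaDB URL and credentials are placeholders and a MariaDB JDBC driver on the classpath is assumed, since the real migrator resolves both from its own configuration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigrationStepSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; the actual db-migrator reads these from its environment.
            String url = "jdbc:mariadb://localhost:3306/policyadmin";
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_password");
                 Statement stmt = conn.createStatement()) {
                // DDL copied verbatim from the 0550-toscacapabilitytypes.sql step shown in the log.
                stmt.executeUpdate("CREATE TABLE IF NOT EXISTS toscacapabilitytypes "
                        + "(name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                        + "PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))");
            }
        }
    }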
17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 17:04:40 policy-clamp-ac-http-ppnt | ssl.trustmanager.algorithm = PKIX 17:04:40 kafka | ssl.cipher.suites = [] 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.certificates = null 17:04:40 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-clamp-ac-sim-ppnt | ssl.secure.random.implementation = null 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.771+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.certificates = null 17:04:40 kafka | ssl.client.auth = none 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.location = null 17:04:40 policy-pap | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.expected.audience = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.772+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.location = null 17:04:40 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.password = null 17:04:40 policy-pap | sasl.login.class = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.certificates = null 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.773+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 6 with epoch 0 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.password = null 17:04:40 kafka | ssl.endpoint.identification.algorithm = https 17:04:40 policy-clamp-ac-pf-ppnt | ssl.truststore.type = JKS 17:04:40 policy-pap | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.location = null 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.773+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:04:40 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 17:04:40 policy-clamp-ac-http-ppnt | ssl.truststore.type = JKS 17:04:40 kafka | ssl.engine.factory.class = null 17:04:40 policy-clamp-ac-pf-ppnt | transaction.timeout.ms = 60000 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.password = null 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.779+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] (Re-)joining group 17:04:40 
policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | transaction.timeout.ms = 60000 17:04:40 kafka | ssl.key.password = null 17:04:40 kafka | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-ac-pf-ppnt | transactional.id = null 17:04:40 policy-clamp-ac-sim-ppnt | ssl.truststore.type = JKS 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.796+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Request joining group due to: need to re-join with the given member-id: consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2-944c0d7e-1fbd-44f5-9ee5-8d53a0c0b80c 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 17:04:40 policy-clamp-ac-http-ppnt | transactional.id = null 17:04:40 policy-pap | sasl.login.read.timeout.ms = null 17:04:40 kafka | ssl.keystore.certificate.chain = null 17:04:40 policy-clamp-ac-pf-ppnt | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 policy-clamp-ac-sim-ppnt | transaction.timeout.ms = 60000 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.796+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 kafka | ssl.keystore.key = null 17:04:40 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-ac-pf-ppnt | 17:04:40 policy-clamp-ac-sim-ppnt | transactional.id = null 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-apex-pdp | [2024-02-16T17:03:27.796+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] (Re-)joining group 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | 17:04:40 kafka | ssl.keystore.location = null 17:04:40 policy-pap | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.532+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 17:04:40 policy-clamp-ac-sim-ppnt | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-apex-pdp | [2024-02-16T17:03:28.425+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.350+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
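The apex-pdp ConsumerCoordinator entries here (continuing just below) walk through the standard Kafka consumer-group handshake: discover the group coordinator, get bounced once with MemberIdRequiredException, re-join with the assigned member id, receive the policy-pdp-pap-0 assignment, sync, and reset the offset. The sketch below shows the client-side view of that same flow using a ConsumerRebalanceListener; the group id is a made-up placeholder, and this is not how the policy components themselves register their listeners.

    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RebalanceWatcher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // broker from the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "rebalance-watcher-demo"); // hypothetical group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    @Override
                    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                        // Client-side counterpart of "Adding newly assigned partitions" in the surrounding log.
                        System.out.println("Assigned: " + partitions);
                    }

                    @Override
                    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                        System.out.println("Revoked: " + partitions);
                    }
                });
                // The first poll drives the join/sync handshake recorded in the log
                // (coordinator discovery, member-id re-join, assignment, offset reset).
                consumer.poll(Duration.ofSeconds(10));
            }
        }
    }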
17:04:40 kafka | ssl.keystore.password = null 17:04:40 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-runtime-acm | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:28.428+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 17:04:40 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.404+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 kafka | ssl.keystore.type = JKS 17:04:40 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-sim-ppnt | 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.606+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-clamp-runtime-acm | security.protocol = PLAINTEXT 17:04:40 policy-apex-pdp | [2024-02-16T17:03:30.799+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Successfully joined group with generation Generation{generationId=1, memberId='consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2-944c0d7e-1fbd-44f5-9ee5-8d53a0c0b80c', protocol='range'} 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.404+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 kafka | ssl.principal.mapping.rules = DEFAULT 17:04:40 policy-pap | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.526+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.606+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-clamp-runtime-acm | security.providers = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:30.805+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Finished assignment for group at generation 1: {consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2-944c0d7e-1fbd-44f5-9ee5-8d53a0c0b80c=Assignment(partitions=[policy-pdp-pap-0])} 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.404+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708102973404 17:04:40 kafka | ssl.protocol = TLSv1.3 17:04:40 policy-pap | sasl.mechanism = GSSAPI 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.820+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.606+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708102998606 17:04:40 policy-clamp-runtime-acm | send.buffer.bytes = 131072 17:04:40 policy-apex-pdp | [2024-02-16T17:03:30.811+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Successfully synced group in generation Generation{generationId=1, 
memberId='consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2-944c0d7e-1fbd-44f5-9ee5-8d53a0c0b80c', protocol='range'} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.406+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a2a67dd7-036e-47bf-8bb4-b8ac84a561a1, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.406+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantMessagePublisher$$SpringCGLIB$$0 17:04:40 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.821+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.606+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=88c2f06c-c9a4-45b3-918b-942592d06e7b, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:04:40 policy-clamp-runtime-acm | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | [2024-02-16T17:03:30.811+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 17:04:40 kafka | ssl.provider = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.442+00:00|INFO|ServiceManager|main] service manager starting Listener AcPropertyUpdateListener 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.462+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantPrimeListener 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.821+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708102969820 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.606+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantMessagePublisher$$SpringCGLIB$$0 17:04:40 policy-clamp-runtime-acm | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | [2024-02-16T17:03:30.813+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Adding newly assigned partitions: policy-pdp-pap-0 17:04:40 kafka | ssl.secure.random.implementation = null 17:04:40 policy-pap | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionMigrationListener 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.822+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9f378aa3-8f61-4875-a93d-dde5000eb5f3, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.611+00:00|INFO|ServiceManager|main] service manager starting Listener AcPropertyUpdateListener 17:04:40 policy-clamp-runtime-acm | ssl.cipher.suites = null 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | [2024-02-16T17:03:30.818+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Found no committed offset for partition policy-pdp-pap-0 17:04:40 kafka | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-pap | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRestartListener 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.822+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantMessagePublisher$$SpringCGLIB$$0 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.612+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantPrimeListener 17:04:40 policy-clamp-runtime-acm | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 17:04:40 policy-apex-pdp | [2024-02-16T17:03:30.825+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2, groupId=02b3ddfc-6c0d-4750-8519-6e56d3cb3479] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 17:04:40 kafka | ssl.truststore.certificates = null 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRegisterAckListener 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.826+00:00|INFO|ServiceManager|main] service manager starting Listener AcPropertyUpdateListener 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.612+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionMigrationListener 17:04:40 policy-clamp-runtime-acm | ssl.endpoint.identification.algorithm = https 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | [2024-02-16T17:03:47.494+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 17:04:40 kafka | ssl.truststore.location = null 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantStatusReqListener 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.842+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantPrimeListener 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.612+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRestartListener 17:04:40 policy-clamp-runtime-acm | ssl.engine.factory.class = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"89129af7-b008-4a1b-9ec6-eb95469de049","timestampMs":1708103027493,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup"} 17:04:40 kafka | ssl.truststore.password = null 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionStateChangeListener 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.842+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionMigrationListener 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.612+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRegisterAckListener 17:04:40 policy-clamp-runtime-acm | ssl.key.password = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | [2024-02-16T17:03:47.521+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 kafka | ssl.truststore.type = JKS 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionDeployListener 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.842+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRestartListener 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.613+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantStatusReqListener 17:04:40 policy-clamp-runtime-acm | ssl.keymanager.algorithm = SunX509 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"89129af7-b008-4a1b-9ec6-eb95469de049","timestampMs":1708103027493,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup"} 17:04:40 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 17:04:40 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantDeregisterAckListener 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.842+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRegisterAckListener 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.613+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionStateChangeListener 17:04:40 policy-clamp-runtime-acm | ssl.keystore.certificate.chain = null 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | [2024-02-16T17:03:47.527+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:40 kafka | transaction.max.timeout.ms = 900000 17:04:40 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|ServiceManager|main] service manager starting Topic Message Dispatcher 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.843+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantStatusReqListener 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.613+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionDeployListener 17:04:40 policy-clamp-runtime-acm | ssl.keystore.key = null 17:04:40 
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 17:04:40 policy-apex-pdp | [2024-02-16T17:03:47.946+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 kafka | transaction.partition.verification.enable = true 17:04:40 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=32e809a3-a7c0-4e13-b7a3-aa811059e0bc, consumerInstance=policy-clamp-ac-http-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4099209b 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.843+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionStateChangeListener 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.613+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantDeregisterAckListener 17:04:40 policy-clamp-runtime-acm | ssl.keystore.location = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c4e106f0-4746-43d2-a87a-ad001ce96df0","timestampMs":1708103027712,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 17:04:40 policy-pap | security.protocol = PLAINTEXT 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=32e809a3-a7c0-4e13-b7a3-aa811059e0bc, consumerInstance=policy-clamp-ac-http-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.843+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionDeployListener 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.613+00:00|INFO|ServiceManager|main] service manager starting Topic Message Dispatcher 17:04:40 policy-clamp-runtime-acm | ssl.keystore.password = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:47.972+00:00|INFO|network|Timer-1] 
[OUT|KAFKA|policy-pdp-pap] 17:04:40 kafka | transaction.state.log.load.buffer.size = 5242880 17:04:40 policy-pap | security.providers = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.463+00:00|INFO|ServiceManager|main] service manager started 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.843+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantDeregisterAckListener 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.614+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=97317da4-3ba6-4109-8e73-20dc2312d257, consumerInstance=policy-clamp-ac-pf-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5a1f778 17:04:40 policy-clamp-runtime-acm | ssl.keystore.type = JKS 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"660a8e24-1e60-4f24-86fc-3b683cbb50d9","timestampMs":1708103027972,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup"} 17:04:40 kafka | transaction.state.log.min.isr = 2 17:04:40 policy-pap | send.buffer.bytes = 131072 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.594+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.843+00:00|INFO|ServiceManager|main] service manager starting Topic Message Dispatcher 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.614+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=97317da4-3ba6-4109-8e73-20dc2312d257, consumerInstance=policy-clamp-ac-pf-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 17:04:40 policy-clamp-runtime-acm | ssl.protocol = TLSv1.3 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | [2024-02-16T17:03:47.972+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 17:04:40 kafka | transaction.state.log.num.partitions = 50 17:04:40 policy-pap | session.timeout.ms = 45000 17:04:40 policy-clamp-ac-http-ppnt | [] 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.843+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6a2107c9-1f65-47c8-af5c-8c5cc7111397, 
consumerInstance=policy-clamp-ac-sim-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1200458e 17:04:40 policy-clamp-runtime-acm | ssl.provider = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:47.976+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:04:40 kafka | transaction.state.log.replication.factor = 3 17:04:40 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:53.602+00:00|INFO|network|main] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.844+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6a2107c9-1f65-47c8-af5c-8c5cc7111397, consumerInstance=policy-clamp-ac-sim-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.614+00:00|INFO|ServiceManager|main] service manager started 17:04:40 policy-clamp-runtime-acm | ssl.secure.random.implementation = null 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c4e106f0-4746-43d2-a87a-ad001ce96df0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c9d56859-f8bc-4d48-a940-7b0a56f9b061","timestampMs":1708103027976,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 kafka | transaction.state.log.segment.bytes = 104857600 17:04:40 policy-pap | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-pap | ssl.cipher.suites = null 17:04:40 policy-clamp-ac-http-ppnt | {"participantSupportedElementType":[{"id":"96f16c87-93d1-40e8-89e7-ca9ee0be53f1","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"bbb745ed-af79-4380-b7cc-c307689bffc4","timestamp":"2024-02-16T17:02:53.464057445Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:54.322+00:00|INFO|Metadata|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.675+00:00|INFO|OrderedServiceImpl|main] ***** 
OrderedServiceImpl implementers: 17:04:40 policy-clamp-runtime-acm | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-apex-pdp | [2024-02-16T17:03:47.998+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 kafka | transactional.id.expiration.ms = 604800000 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.845+00:00|INFO|ServiceManager|main] service manager started 17:04:40 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:54.327+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:04:40 policy-clamp-ac-pf-ppnt | [] 17:04:40 policy-clamp-runtime-acm | ssl.truststore.certificates = null 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"660a8e24-1e60-4f24-86fc-3b683cbb50d9","timestampMs":1708103027972,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup"} 17:04:40 kafka | unclean.leader.election.enable = false 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.957+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: 17:04:40 policy-pap | ssl.endpoint.identification.algorithm = https 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:54.330+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:18.691+00:00|INFO|network|main] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | ssl.truststore.location = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:47.998+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:40 kafka | unstable.api.versions.enable = false 17:04:40 policy-clamp-ac-sim-ppnt | [] 17:04:40 policy-pap | ssl.engine.factory.class = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:54.331+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 17:04:40 policy-clamp-runtime-acm | ssl.truststore.password = null 17:04:40 policy-clamp-runtime-acm | ssl.truststore.type = JKS 17:04:40 policy-apex-pdp | [2024-02-16T17:03:48.004+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 kafka | zookeeper.clientCnxnSocket = null 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:49.964+00:00|INFO|network|main] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | ssl.key.password = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | 
[2024-02-16T17:02:54.349+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] (Re-)joining group 17:04:40 policy-clamp-ac-pf-ppnt | {"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"b87f9a17-6d57-4360-84d0-97780fa59145","timestamp":"2024-02-16T17:03:18.614708531Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-runtime-acm | transaction.timeout.ms = 60000 17:04:40 kafka | zookeeper.connect = zookeeper:2181 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c4e106f0-4746-43d2-a87a-ad001ce96df0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c9d56859-f8bc-4d48-a940-7b0a56f9b061","timestampMs":1708103027976,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-pap | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-ac-sim-ppnt | {"participantSupportedElementType":[{"id":"939773c5-cc6c-46b4-b31a-65f7c2af01e5","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"d4e64c45-0c5f-485a-972b-a44e9c7ca20f","timestamp":"2024-02-16T17:02:49.857138577Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:54.387+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Request joining group due to: need to re-join with the given member-id: consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2-69b726ae-4f77-4dda-9d1d-cb3f6a755d38 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:19.127+00:00|INFO|Metadata|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-clamp-runtime-acm | transactional.id = null 17:04:40 kafka | zookeeper.connection.timeout.ms = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:48.004+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:40 policy-pap | ssl.keystore.certificate.chain = null 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.175+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:54.387+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:19.127+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-clamp-runtime-acm | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 kafka | zookeeper.max.in.flight.requests = 10 17:04:40 policy-apex-pdp | [2024-02-16T17:03:48.066+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-pap | ssl.keystore.key = null 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.177+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-db-migrator | > upgrade 0630-toscanodetype.sql 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:54.387+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] (Re-)joining group 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:19.129+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 3 with epoch 0 17:04:40 policy-clamp-runtime-acm | 17:04:40 kafka | zookeeper.metadata.migration.enable = false 17:04:40 policy-apex-pdp | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"042a6145-56dc-4711-9864-8edc62c6935b","timestampMs":1708103027713,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-pap | ssl.keystore.location = null 17:04:40 policy-pap | ssl.keystore.password = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.238+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Error while fetching metadata with correlation id 2 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:19.129+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.086+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
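Editor's note: the policy-clamp-runtime-acm entries above ("Instantiated an idempotent producer", StringSerializer value serializer, transactional.id = null) describe a stock kafka-clients producer behind the policy-acruntime-participant sink. A minimal sketch of such a producer, assuming only the plain Apache Kafka client API — the class name and JSON payload are illustrative placeholders, not the ONAP topic-sink wrapper:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Illustrative producer mirroring the settings visible in the log:
// bootstrap servers kafka:9092, String key/value serializers, idempotence enabled.
public class AcmMessageProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // "Instantiated an idempotent producer"
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical payload shaped like the PARTICIPANT_REGISTER messages in this log.
            String payload = "{\"messageType\":\"PARTICIPANT_REGISTER\",\"participantId\":\"example-id\"}";
            producer.send(new ProducerRecord<>("policy-acruntime-participant", payload));
            producer.flush();
        }
    }
}

Idempotence here only guards against duplicate sends on retry; it does not make the send transactional, which matches the transactional.id = null setting recorded above.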
17:04:40 kafka | zookeeper.session.timeout.ms = 18000 17:04:40 policy-apex-pdp | [2024-02-16T17:03:48.069+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:54.438+00:00|INFO|ParticipantMessagePublisher|main] Sent Participant Register message to CLAMP - ParticipantRegister(super=ParticipantMessage(messageType=PARTICIPANT_REGISTER, messageId=bbb745ed-af79-4380-b7cc-c307689bffc4, timestamp=2024-02-16T17:02:53.464057445Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c01, automationCompositionId=null, compositionId=null), participantSupportedElementType=[ParticipantSupportedElementType(id=96f16c87-93d1-40e8-89e7-ca9ee0be53f1, typeName=org.onap.policy.clamp.acm.HttpAutomationCompositionElement, typeVersion=1.0.0)]) 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.239+00:00|INFO|Metadata|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:19.138+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] (Re-)joining group 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.108+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 kafka | zookeeper.set.acl = false 17:04:40 policy-pap | ssl.keystore.type = JKS 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"042a6145-56dc-4711-9864-8edc62c6935b","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"eb9c4d77-f65c-4fdc-863d-a8584c2b63f2","timestampMs":1708103028069,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:54.443+00:00|INFO|Application|main] Started Application in 18.581 seconds (process running for 20.117) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.286+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:19.156+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Request joining group due to: need to re-join with the given member-id: consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2-8477fa0e-b745-40f3-b6b5-c74935b7f77f 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.108+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 kafka | zookeeper.ssl.cipher.suites = null 17:04:40 policy-pap | ssl.protocol = TLSv1.3 17:04:40 policy-apex-pdp | [2024-02-16T17:03:48.082+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:57.393+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Successfully joined group with generation Generation{generationId=1, memberId='consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2-69b726ae-4f77-4dda-9d1d-cb3f6a755d38', protocol='range'} 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.311+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 5 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:19.156+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 17:04:40 kafka | zookeeper.ssl.client.enable = false 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.108+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103021107 17:04:40 policy-pap | ssl.provider = null 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"042a6145-56dc-4711-9864-8edc62c6935b","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"eb9c4d77-f65c-4fdc-863d-a8584c2b63f2","timestampMs":1708103028069,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:57.401+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Finished assignment for group at generation 1: {consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2-69b726ae-4f77-4dda-9d1d-cb3f6a755d38=Assignment(partitions=[policy-acruntime-participant-0])} 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.372+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Error while fetching metadata with correlation id 4 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:19.156+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] (Re-)joining group 17:04:40 kafka | zookeeper.ssl.crl.enable = false 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.108+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a1463b30-8108-460a-8cc2-b705ea225556, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:04:40 policy-pap | ssl.secure.random.implementation = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:48.082+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:57.410+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Successfully synced group in generation Generation{generationId=1, memberId='consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2-69b726ae-4f77-4dda-9d1d-cb3f6a755d38', protocol='range'} 17:04:40 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.430+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 6 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:19.203+00:00|INFO|ParticipantMessagePublisher|main] Sent Participant Register message to CLAMP - ParticipantRegister(super=ParticipantMessage(messageType=PARTICIPANT_REGISTER, messageId=b87f9a17-6d57-4360-84d0-97780fa59145, timestamp=2024-02-16T17:03:18.614708531Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c03, automationCompositionId=null, compositionId=null), participantSupportedElementType=[ParticipantSupportedElementType(id=0b8ba591-6c02-4faf-8911-f6ce37e044af, typeName=org.onap.policy.clamp.acm.PolicyAutomationCompositionElement, typeVersion=1.0.0)]) 17:04:40 kafka | zookeeper.ssl.enabled.protocols = null 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.108+00:00|INFO|ServiceManager|main] service manager starting Publisher AutomationCompositionStateChangePublisher$$SpringCGLIB$$0 17:04:40 policy-pap | 
ssl.trustmanager.algorithm = PKIX 17:04:40 policy-apex-pdp | [2024-02-16T17:03:48.149+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:57.410+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Notifying assignor about the new Assignment(partitions=[policy-acruntime-participant-0]) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.489+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Error while fetching metadata with correlation id 6 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:19.211+00:00|INFO|PolicyParticipantApplication|main] Started PolicyParticipantApplication in 10.141 seconds (process running for 10.898) 17:04:40 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.111+00:00|INFO|ServiceManager|main] service manager starting Publisher AutomationCompositionDeployPublisher$$SpringCGLIB$$0 17:04:40 policy-pap | ssl.truststore.certificates = null 17:04:40 policy-apex-pdp | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"75d5babd-56a0-4abd-aad7-d01c728af538","timestampMs":1708103028116,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:57.414+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Adding newly assigned partitions: policy-acruntime-participant-0 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.596+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 7 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:22.161+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Successfully joined group with generation Generation{generationId=1, memberId='consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2-8477fa0e-b745-40f3-b6b5-c74935b7f77f', protocol='range'} 17:04:40 kafka | zookeeper.ssl.keystore.location = null 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.111+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantPrimePublisher$$SpringCGLIB$$0 17:04:40 policy-pap | ssl.truststore.location = null 17:04:40 policy-apex-pdp | [2024-02-16T17:03:48.151+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:57.423+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer 
clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Found no committed offset for partition policy-acruntime-participant-0 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.648+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Error while fetching metadata with correlation id 8 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:22.172+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Finished assignment for group at generation 1: {consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2-8477fa0e-b745-40f3-b6b5-c74935b7f77f=Assignment(partitions=[policy-acruntime-participant-0])} 17:04:40 kafka | zookeeper.ssl.keystore.password = null 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.111+00:00|INFO|ServiceManager|main] service manager starting Publisher AutomationCompositionMigrationPublisher$$SpringCGLIB$$0 17:04:40 policy-pap | ssl.truststore.password = null 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"75d5babd-56a0-4abd-aad7-d01c728af538","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"893654c5-ad78-4d7e-a165-4a0ab3a005f9","timestampMs":1708103028151,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:02:57.434+00:00|INFO|SubscriptionState|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2, groupId=32e809a3-a7c0-4e13-b7a3-aa811059e0bc] Resetting offset for partition policy-acruntime-participant-0 to position FetchPosition{offset=3, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
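Editor's note: the ConsumerCoordinator/SubscriptionState entries above trace the normal kafka-clients group-join cycle for policy-acruntime-participant: discover the group coordinator, fail the first join with MemberIdRequiredException, re-join with the assigned member id, sync, take the assigned partition, and reset the offset because no committed offset exists yet. A minimal consumer that goes through the same sequence, assuming only the plain Apache Kafka client API (the group id and class name are illustrative, not the participant's generated values):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ParticipantTopicConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-participant-group"); // the log uses a generated UUID group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // no committed offset -> offset reset, as logged
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-acruntime-participant"));
            // poll() drives the join/sync/assignment steps that ConsumerCoordinator logs above.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}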
17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.915+00:00|INFO|ParticipantMessagePublisher|main] Sent Participant Register message to CLAMP - ParticipantRegister(super=ParticipantMessage(messageType=PARTICIPANT_REGISTER, messageId=d4e64c45-0c5f-485a-972b-a44e9c7ca20f, timestamp=2024-02-16T17:02:49.857138577Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c90, automationCompositionId=null, compositionId=null), participantSupportedElementType=[ParticipantSupportedElementType(id=939773c5-cc6c-46b4-b31a-65f7c2af01e5, typeName=org.onap.policy.clamp.acm.SimAutomationCompositionElement, typeVersion=1.0.0)]) 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:22.180+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Successfully synced group in generation Generation{generationId=1, memberId='consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2-8477fa0e-b745-40f3-b6b5-c74935b7f77f', protocol='range'} 17:04:40 kafka | zookeeper.ssl.keystore.type = null 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.111+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantDeregisterAckPublisher$$SpringCGLIB$$0 17:04:40 policy-pap | ssl.truststore.type = JKS 17:04:40 policy-apex-pdp | [2024-02-16T17:03:48.160+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:19.201+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:51.918+00:00|INFO|Application|main] Started Application in 17.532 seconds (process running for 18.277) 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:22.181+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Notifying assignor about the new Assignment(partitions=[policy-acruntime-participant-0]) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.111+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantRegisterAckPublisher$$SpringCGLIB$$0 17:04:40 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"75d5babd-56a0-4abd-aad7-d01c728af538","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"893654c5-ad78-4d7e-a165-4a0ab3a005f9","timestampMs":1708103028151,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 kafka | zookeeper.ssl.ocsp.enable = false 17:04:40 policy-clamp-ac-http-ppnt | {"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"b87f9a17-6d57-4360-84d0-97780fa59145","timestamp":"2024-02-16T17:03:18.614708531Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 17:04:40 policy-clamp-ac-sim-ppnt | 
[2024-02-16T17:02:53.023+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:22.191+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Adding newly assigned partitions: policy-acruntime-participant-0 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.111+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantStatusReqPublisher$$SpringCGLIB$$0 17:04:40 policy-pap | 17:04:40 policy-apex-pdp | [2024-02-16T17:03:48.160+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:40 kafka | zookeeper.ssl.protocol = TLSv1.2 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:19.209+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_REGISTER 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:53.035+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] (Re-)joining group 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:22.199+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] Found no committed offset for partition policy-acruntime-participant-0 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.111+00:00|INFO|ServiceManager|main] service manager starting Publisher AcElementPropertiesPublisher$$SpringCGLIB$$0 17:04:40 policy-pap | [2024-02-16T17:03:25.751+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-apex-pdp | [2024-02-16T17:03:57.675+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 kafka | zookeeper.ssl.truststore.location = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.196+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:53.071+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Request joining group due to: need to re-join with the given member-id: consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2-89480e25-06c6-437b-9c32-99b445dd9bb0 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:22.218+00:00|INFO|SubscriptionState|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2, groupId=97317da4-3ba6-4109-8e73-20dc2312d257] 
Resetting offset for partition policy-acruntime-participant-0 to position FetchPosition{offset=4, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.111+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantRestartPublisher$$SpringCGLIB$$0 17:04:40 policy-pap | [2024-02-16T17:03:25.751+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 kafka | zookeeper.ssl.truststore.password = null 17:04:40 policy-clamp-ac-http-ppnt | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"751ed6c0-ea59-4009-9448-d8d54dacf1ac","timestamp":"2024-02-16T17:03:47.116528928Z"} 17:04:40 policy-apex-pdp | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[{"type":"onap.policies.native.Apex","type_version":"1.0.0","properties":{"eventInputParameters":{"DmaapConsumer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","groupId":"clamp-grp","enableAutoCommit":true,"autoCommitTime":1000,"sessionTimeout":30000,"consumerPollTime":100,"consumerTopicList":["ac_element_msg"],"keyDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","valueDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseEvent"}},"eventName":"AcElementEvent","eventNameFilter":"AcElementEvent"}},"engineServiceParameters":{"name":"MyApexEngine","version":"0.0.1","id":45,"instanceCount":2,"deploymentPort":12561,"engineParameters":{"executorParameters":{"JAVASCRIPT":{"parameterClassName":"org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"}},"contextParameters":{"parameterClassName":"org.onap.policy.apex.context.parameters.ContextParameters","schemaParameters":{"Json":{"parameterClassName":"org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"}}}},"policy_type_impl":{"policies":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"policyMap":{"entry":[{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"policyKey":{"name":"ReceiveEventPolicy","version":"0.0.1"},"template":"Freestyle","state":{"entry":[{"key":"DecideForwardingState","value":{"stateKey":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DecideForwardingState"},"trigger":{"name":"AcElementEvent","version":"0.0.1"},"stateOutputs":{"entry":[{"key":"CreateForwardPayload","value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"},"outgoingEvent":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"outgoingEventReference":[{"name":"DmaapResponseStatusEvent","version":"0.0.1"}],"nextState":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}}}]},"contextAlbumReference":[],"taskSelectionLogic":{"key":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"},"logicFlavour":"UNDEFINED","logic":""},"stateFinalizerLogicMap":{"entry":[]},"defaultTask":{"name":"ForwardPayloadTask","version":"0.0.1"},"taskReferences":{"entry":
[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"ReceiveEventPolicy"},"outputType":"DIRECT","output":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"}}}]}}}]},"firstState":"DecideForwardingState"}}]}},"tasks":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"taskMap":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"inputEvent":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"},"outputEvents":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]},"taskParameters":{"entry":[]},"contextAlbumReference":[{"name":"ACElementAlbum","version":"0.0.1"}],"taskLogic":{"key":{"parentKeyName":"ForwardPayloadTask","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"TaskLogic"},"logicFlavour":"JAVASCRIPT","logic":"/*\n * ============LICENSE_START=======================================================\n * Copyright (C) 2022 Nordix. 
All rights reserved.\n * ================================================================================\n * Licensed under the Apache License, Version 2.0 (the 'License');\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an 'AS IS' BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n * SPDX-License-Identifier: Apache-2.0\n * ============LICENSE_END=========================================================\n */\n\nexecutor.logger.info(executor.subject.id);\nexecutor.logger.info(executor.inFields);\n\nvar msgResponse = executor.inFields.get('DmaapResponseEvent');\nexecutor.logger.info('Task in progress with mesages: ' + msgResponse);\n\nvar elementId = msgResponse.get('elementId').get('name');\n\nif (msgResponse.get('messageType') == 'STATUS' &&\n (elementId == 'onap.policy.clamp.ac.startertobridge'\n || elementId == 'onap.policy.clamp.ac.bridgetosink')) {\n\n var receiverId = '';\n if (elementId == 'onap.policy.clamp.ac.startertobridge') {\n receiverId = 'onap.policy.clamp.ac.bridge';\n } else {\n receiverId = 'onap.policy.clamp.ac.sink';\n }\n\n var elementIdResponse = new java.util.HashMap();\n elementIdResponse.put('name', receiverId);\n elementIdResponse.put('version', msgResponse.get('elementId').get('version'));\n\n var dmaapResponse = new java.util.HashMap();\n dmaapResponse.put('elementId', elementIdResponse);\n\n var message = msgResponse.get('message') + ' trace added from policy';\n dmaapResponse.put('message', message);\n dmaapResponse.put('messageType', 'STATUS');\n dmaapResponse.put('messageId', msgResponse.get('messageId'));\n dmaapResponse.put('timestamp', msgResponse.get('timestamp'));\n\n executor.logger.info('Sending forwarding Event to Ac element: ' + dmaapResponse);\n\n executor.outFields.put('DmaapResponseStatusEvent', 
dmaapResponse);\n}\n\ntrue;"}}}]}},"events":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"eventMap":{"entry":[{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"file","parameter":{"entry":[{"key":"final_status","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"final_status"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}},{"key":"message","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"message"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]}},"albums":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"albums":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"scope":"policy","isWritable":true,"itemSchema":{"name":"ACEventType","version":"0.0.1"}}}]}},"schemas":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"schemas":{"entry":[{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"schemaFlavour":"Json","schemaDefinition":"{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"elementId\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"version\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"name\",\n \"version\"\n ]\n },\n \"message\": {\n \"type\": \"string\"\n },\n \"messageType\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"elementId\",\n \"message\",\n \"messageType\"\n 
]\n}"}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.Integer"}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.String"}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.util.UUID"}}]}},"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"keyInformation":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"keyInfoMap":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"UUID":"7cddfab8-6d3f-3f7f-8ac3-e2eb5979c900","description":"Generated description for concept referred to by key \"ACElementAlbum:0.0.1\""}},{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"UUID":"dab78794-b666-3929-a75b-70d634b04fe5","description":"Generated description for concept referred to by key \"ACEventType:0.0.1\""}},{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"UUID":"da478611-7d77-3c46-b4be-be968769ba4e","description":"Generated description for concept referred to by key \"APEXacElementPolicy:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"UUID":"fa8dc15e-8c8d-3de3-a0f8-585b76511175","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Albums:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"UUID":"8508cd65-8dd2-342d-a5c6-1570810dbe2b","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Events:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"UUID":"09e6927d-c5ac-3779-919f-9333994eed22","description":"Generated description for concept referred to by key \"APEXacElementPolicy_KeyInfo:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"UUID":"cade3c9a-1600-3642-a6f4-315612187f46","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Policies:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"UUID":"5bb4a8e9-35fa-37db-9a49-48ef036a7ba9","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Schemas:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"UUID":"2527eeec-0d1f-3094-ad3f-212622b12836","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Tasks:0.0.1\""}},{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"UUID":"32c013e2-2740-3986-a626-cbdf665b63e9","description":"Generated description for concept referred to by key 
\"AcElementEvent:0.0.1\""}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"UUID":"2715cb6c-2778-3461-8b69-871e79f95935","description":"Generated description for concept referred to by key \"DmaapResponseStatusEvent:0.0.1\""}},{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"UUID":"51defa03-1ecf-3314-bf34-2a652bce57fa","description":"Generated description for concept referred to by key \"ForwardPayloadTask:0.0.1\""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"UUID":"c540f048-96af-35e3-a36e-e9c29377cba7","description":"Generated description for concept referred to by key \"LogEvent:0.0.1\""}},{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"UUID":"568b7345-9de1-36d3-b6a3-9b857e6809a1","description":"Generated description for concept referred to by key \"ReceiveEventPolicy:0.0.1\""}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"UUID":"153791fd-ae0a-36a7-88a5-309a7936415d","description":"Generated description for concept referred to by key \"SimpleIntType:0.0.1\""}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"UUID":"8a4957cf-9493-3a76-8c22-a208e23259af","description":"Generated description for concept referred to by key \"SimpleStringType:0.0.1\""}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"UUID":"6a8cc68e-dfc8-3403-9c6d-071c886b319c","description":"Generated description for concept referred to by key \"UUIDType:0.0.1\""}}]}}}},"eventOutputParameters":{"logOutputter":{"carrierTechnologyParameters":{"carrierTechnology":"FILE","parameters":{"fileName":"outputevents.log"}},"eventProtocolParameters":{"eventProtocol":"JSON"}},"DmaapReplyProducer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","acks":"all","retries":0,"batchSize":16384,"lingerTime":1,"bufferMemory":33554432,"producerTopic":"policy_update_msg","keySerializer":"org.apache.kafka.common.serialization.StringSerializer","valueSerializer":"org.apache.kafka.common.serialization.StringSerializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseStatusEvent"}},"eventNameFilter":"LogEvent|DmaapResponseStatusEvent"}}},"name":"onap.policies.native.apex.ac.element","version":"1.0.0","metadata":{"policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"01a3277e-772a-4c95-b79a-4ecc96f9bfb9","timestampMs":1708103037546,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:53.072+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.203+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.112+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRegisterListener 17:04:40 policy-pap | [2024-02-16T17:03:25.751+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103005751 17:04:40 kafka | zookeeper.ssl.truststore.type = null 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.253+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-apex-pdp | [2024-02-16T17:03:57.706+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:apex/tosca/policy/list 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:53.073+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] (Re-)joining group 17:04:40 policy-clamp-ac-pf-ppnt | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"751ed6c0-ea59-4009-9448-d8d54dacf1ac","timestamp":"2024-02-16T17:03:47.116528928Z"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.112+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantStatusListener 17:04:40 policy-pap | [2024-02-16T17:03:25.751+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Subscribed to topic(s): policy-pdp-pap 17:04:40 kafka | (kafka.server.KafkaConfig) 17:04:40 policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"96f16c87-93d1-40e8-89e7-ca9ee0be53f1","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"90887c49-7ec2-421b-9586-f22755afb378","timestamp":"2024-02-16T17:03:47.203750282Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-apex-pdp | [2024-02-16T17:03:57.751+00:00|INFO|ApexEngineHandler|KAFKA-source-policy-pdp-pap] Starting apex engine for policy onap.policies.native.apex.ac.element 1.0.0 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:56.105+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Successfully joined group with generation Generation{generationId=1, memberId='consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2-89480e25-06c6-437b-9c32-99b445dd9bb0', protocol='range'} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.219+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.112+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionStateChangeAckListener 17:04:40 policy-pap | [2024-02-16T17:03:25.752+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 17:04:40 kafka | [2024-02-16 17:02:32,471] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 17:04:40 policy-clamp-ac-http-ppnt | 
[2024-02-16T17:03:47.263+00:00|INFO|ParticipantMessagePublisher|KAFKA-source-policy-acruntime-participant] Sent Participant Status message to CLAMP - ParticipantStatus(super=ParticipantMessage(messageType=PARTICIPANT_STATUS, messageId=90887c49-7ec2-421b-9586-f22755afb378, timestamp=2024-02-16T17:03:47.203750282Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c01, automationCompositionId=null, compositionId=null), state=ON_LINE, participantDefinitionUpdates=[], automationCompositionInfoList=[], participantSupportedElementType=[ParticipantSupportedElementType(id=96f16c87-93d1-40e8-89e7-ca9ee0be53f1, typeName=org.onap.policy.clamp.acm.HttpAutomationCompositionElement, typeVersion=1.0.0)]) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:57.835+00:00|INFO|EngineServiceImpl|KAFKA-source-policy-pdp-pap] Created apex engine MyApexEngine-0:0.0.1 . 17:04:40 policy-db-migrator | > upgrade 0660-toscaparameter.sql 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:56.142+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Finished assignment for group at generation 1: {consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2-89480e25-06c6-437b-9c32-99b445dd9bb0=Assignment(partitions=[policy-acruntime-participant-0])} 17:04:40 policy-clamp-ac-pf-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"29e7ea94-9dca-43f0-a2b0-3f661708aa9f","timestamp":"2024-02-16T17:03:47.211534907Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.113+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionUpdateAckListener 17:04:40 policy-pap | [2024-02-16T17:03:25.752+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=11f41433-eb08-4e90-84ba-1b4e7e546b71, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2148b47e 17:04:40 kafka | [2024-02-16 17:02:32,475] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.272+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-apex-pdp | [2024-02-16T17:03:57.835+00:00|INFO|EngineServiceImpl|KAFKA-source-policy-pdp-pap] Created apex engine MyApexEngine-1:0.0.1 . 
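Editor's note: the policy-apex-pdp entries in this stretch show the PDP_UPDATE/PDP_STATUS exchange on policy-pdp-pap: PAP publishes an update or state change, and the PDP answers with a PDP_STATUS whose response.responseStatus field carries the outcome. As a rough illustration of reading such a status payload (not the actual PAP code, which uses its own message classes and coders), a Jackson-based sketch with field names taken from the messages above and the values shortened:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PdpStatusParseSketch {
    public static void main(String[] args) throws Exception {
        // Payload shape copied from the PDP_STATUS entries in this log; values abbreviated.
        String json = "{\"pdpType\":\"apex\",\"state\":\"ACTIVE\",\"healthy\":\"HEALTHY\","
                + "\"response\":{\"responseTo\":\"75d5babd-56a0-4abd-aad7-d01c728af538\","
                + "\"responseStatus\":\"SUCCESS\",\"responseMessage\":\"Pdp already updated\"},"
                + "\"messageName\":\"PDP_STATUS\",\"pdpGroup\":\"defaultGroup\",\"pdpSubgroup\":\"apex\"}";
        JsonNode status = new ObjectMapper().readTree(json);
        boolean success = "SUCCESS".equals(status.path("response").path("responseStatus").asText());
        System.out.println("PDP " + status.path("pdpSubgroup").asText()
                + (success ? " acknowledged: " : " failed: ")
                + status.path("response").path("responseMessage").asText());
    }
}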
17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:56.183+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Successfully synced group in generation Generation{generationId=1, memberId='consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2-89480e25-06c6-437b-9c32-99b445dd9bb0', protocol='range'} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.242+00:00|INFO|ParticipantMessagePublisher|KAFKA-source-policy-acruntime-participant] Sent Participant Status message to CLAMP - ParticipantStatus(super=ParticipantMessage(messageType=PARTICIPANT_STATUS, messageId=29e7ea94-9dca-43f0-a2b0-3f661708aa9f, timestamp=2024-02-16T17:03:47.211534907Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c03, automationCompositionId=null, compositionId=null), state=ON_LINE, participantDefinitionUpdates=[], automationCompositionInfoList=[], participantSupportedElementType=[ParticipantSupportedElementType(id=0b8ba591-6c02-4faf-8911-f6ce37e044af, typeName=org.onap.policy.clamp.acm.PolicyAutomationCompositionElement, typeVersion=1.0.0)]) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.113+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantDeregisterListener 17:04:40 policy-pap | [2024-02-16T17:03:25.752+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=11f41433-eb08-4e90-84ba-1b4e7e546b71, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:04:40 kafka | [2024-02-16 17:02:32,478] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:57.835+00:00|INFO|EngineServiceImpl|KAFKA-source-policy-pdp-pap] APEX service created. 
17:04:40 policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"939773c5-cc6c-46b4-b31a-65f7c2af01e5","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"b2d45c0a-d1ba-4949-b418-95d49ee361f7","timestamp":"2024-02-16T17:03:47.198657381Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:56.185+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Notifying assignor about the new Assignment(partitions=[policy-acruntime-participant-0]) 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.248+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:25.752+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:40 kafka | [2024-02-16 17:02:32,480] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:57.919+00:00|INFO|EngineServiceImpl|KAFKA-source-policy-pdp-pap] Registering apex model on engine MyApexEngine-0:0.0.1 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.272+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:56.190+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Adding newly assigned partitions: policy-acruntime-participant-0 17:04:40 policy-clamp-ac-pf-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"939773c5-cc6c-46b4-b31a-65f7c2af01e5","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"b2d45c0a-d1ba-4949-b418-95d49ee361f7","timestamp":"2024-02-16T17:03:47.198657381Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.113+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantPrimeAckListener 17:04:40 policy-db-migrator | 17:04:40 policy-pap | allow.auto.create.topics = true 17:04:40 kafka | [2024-02-16 17:02:32,512] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.448+00:00|INFO|EngineServiceImpl|KAFKA-source-policy-pdp-pap] Registering apex model on engine MyApexEngine-1:0.0.1 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.273+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 
policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:56.210+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Found no committed offset for partition policy-acruntime-participant-0 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.249+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.113+00:00|INFO|ServiceManager|main] service manager starting Topic Message Dispatcher 17:04:40 policy-db-migrator | 17:04:40 policy-pap | auto.commit.interval.ms = 5000 17:04:40 kafka | [2024-02-16 17:02:32,518] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.540+00:00|INFO|EngineServiceImpl|KAFKA-source-policy-pdp-pap] Added the action listener to the engine 17:04:40 policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"29e7ea94-9dca-43f0-a2b0-3f661708aa9f","timestamp":"2024-02-16T17:03:47.211534907Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:02:56.227+00:00|INFO|SubscriptionState|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2, groupId=6a2107c9-1f65-47c8-af5c-8c5cc7111397] Resetting offset for partition policy-acruntime-participant-0 to position FetchPosition{offset=3, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.249+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.113+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0b0f93e1-9727-45a5-b97d-714a24b64a62, consumerInstance=policy-clamp-runtime-acm, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3ff26c9 17:04:40 policy-db-migrator | > upgrade 0670-toscapolicies.sql 17:04:40 policy-pap | auto.include.jmx.reporter = true 17:04:40 kafka | [2024-02-16 17:02:32,526] INFO Loaded 0 logs in 13ms (kafka.log.LogManager) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.541+00:00|INFO|EngineServiceImpl|KAFKA-source-policy-pdp-pap] Added the action listener to the engine 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.274+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:19.197+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.113+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0b0f93e1-9727-45a5-b97d-714a24b64a62, consumerInstance=policy-clamp-runtime-acm, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 17:04:40 policy-clamp-ac-pf-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"29e7ea94-9dca-43f0-a2b0-3f661708aa9f","timestamp":"2024-02-16T17:03:47.211534907Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | auto.offset.reset = latest 17:04:40 kafka | [2024-02-16 17:02:32,528] INFO Starting log cleanup with a period of 300000 ms. 
(kafka.log.LogManager) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.548+00:00|INFO|ConsumerConfig|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] ConsumerConfig values: 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.277+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.114+00:00|INFO|ServiceManager|main] service manager started 17:04:40 policy-clamp-ac-sim-ppnt | {"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"b87f9a17-6d57-4360-84d0-97780fa59145","timestamp":"2024-02-16T17:03:18.614708531Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.249+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 17:04:40 policy-pap | bootstrap.servers = [kafka:9092] 17:04:40 kafka | [2024-02-16 17:02:32,529] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 17:04:40 policy-apex-pdp | allow.auto.create.topics = true 17:04:40 policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"c5bce600-be93-48d6-9321-677a64168aee","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"757abd77-5d53-4c4b-b040-1abc1768cd48","timestamp":"2024-02-16T17:03:47.217554434Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.115+00:00|INFO|Application|main] Started Application in 11.673 seconds (process running for 12.528) 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:19.201+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_REGISTER 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.267+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | check.crcs = true 17:04:40 kafka | [2024-02-16 17:02:32,541] INFO Starting the log cleaner (kafka.log.LogCleaner) 17:04:40 policy-apex-pdp | auto.commit.interval.ms = 1000 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.278+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.613+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.189+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | 
{"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"c5bce600-be93-48d6-9321-677a64168aee","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"757abd77-5d53-4c4b-b040-1abc1768cd48","timestamp":"2024-02-16T17:03:47.217554434Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 policy-db-migrator | 17:04:40 policy-pap | client.dns.lookup = use_all_dns_ips 17:04:40 kafka | [2024-02-16 17:02:32,590] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 17:04:40 policy-apex-pdp | auto.include.jmx.reporter = true 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.278+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.615+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 7 with epoch 0 17:04:40 policy-clamp-ac-sim-ppnt | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"751ed6c0-ea59-4009-9448-d8d54dacf1ac","timestamp":"2024-02-16T17:03:47.116528928Z"} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.267+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-db-migrator | 17:04:40 policy-pap | client.id = consumer-policy-pap-4 17:04:40 kafka | [2024-02-16 17:02:32,624] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 17:04:40 policy-apex-pdp | auto.offset.reset = latest 17:04:40 policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"96f16c87-93d1-40e8-89e7-ca9ee0be53f1","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"90887c49-7ec2-421b-9586-f22755afb378","timestamp":"2024-02-16T17:03:47.203750282Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.617+00:00|INFO|Metadata|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.210+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.275+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 17:04:40 policy-pap | client.rack = 17:04:40 kafka | [2024-02-16 17:02:32,637] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 17:04:40 policy-apex-pdp | bootstrap.servers = [kafka:9092] 17:04:40 policy-apex-pdp | check.crcs = true 17:04:40 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 17:04:40 policy-clamp-ac-sim-ppnt | 
{"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"939773c5-cc6c-46b4-b31a-65f7c2af01e5","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"b2d45c0a-d1ba-4949-b418-95d49ee361f7","timestamp":"2024-02-16T17:03:47.198657381Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-clamp-ac-pf-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"96f16c87-93d1-40e8-89e7-ca9ee0be53f1","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"90887c49-7ec2-421b-9586-f22755afb378","timestamp":"2024-02-16T17:03:47.203750282Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | connections.max.idle.ms = 540000 17:04:40 policy-apex-pdp | client.id = consumer-clamp-grp-3 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.235+00:00|INFO|ParticipantMessagePublisher|KAFKA-source-policy-acruntime-participant] Sent Participant Status message to CLAMP - ParticipantStatus(super=ParticipantMessage(messageType=PARTICIPANT_STATUS, messageId=b2d45c0a-d1ba-4949-b418-95d49ee361f7, timestamp=2024-02-16T17:03:47.198657381Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c90, automationCompositionId=null, compositionId=null), state=ON_LINE, participantDefinitionUpdates=[], automationCompositionInfoList=[], participantSupportedElementType=[ParticipantSupportedElementType(id=939773c5-cc6c-46b4-b31a-65f7c2af01e5, typeName=org.onap.policy.clamp.acm.SimAutomationCompositionElement, typeVersion=1.0.0)]) 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.276+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.279+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:40 policy-pap | default.api.timeout.ms = 60000 17:04:40 policy-apex-pdp | client.rack = 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.243+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.619+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.762+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.767+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 
17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | enable.auto.commit = true 17:04:40 policy-apex-pdp | connections.max.idle.ms = 540000 17:04:40 policy-clamp-ac-sim-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"939773c5-cc6c-46b4-b31a-65f7c2af01e5","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"b2d45c0a-d1ba-4949-b418-95d49ee361f7","timestamp":"2024-02-16T17:03:47.198657381Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.628+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] (Re-)joining group 17:04:40 policy-clamp-ac-pf-ppnt | {"participantDefinitionUpdates":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"Ericsson","startPhase":0},"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the operational policy for Performance Management Subscription Handling"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Starter"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Bridge"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element 
Sink"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Starter microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Bridge microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Sink microservice"},"outProperties":{}}]}],"messageType":"PARTICIPANT_PRIME","messageId":"fc518aed-3741-43ec-b597-0cd9ccf000cb","timestamp":"2024-02-16T17:03:47.714617694Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-http-ppnt | {"participantDefinitionUpdates":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"Ericsson","startPhase":0},"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the operational policy for Performance Management Subscription Handling"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element 
Starter"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Bridge"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Sink"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Starter microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Bridge microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Sink microservice"},"outProperties":{}}]}],"messageType":"PARTICIPANT_PRIME","messageId":"fc518aed-3741-43ec-b597-0cd9ccf000cb","timestamp":"2024-02-16T17:03:47.714617694Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-db-migrator | 17:04:40 policy-pap | exclude.internal.topics = true 17:04:40 kafka | [2024-02-16 17:02:32,668] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 17:04:40 policy-apex-pdp | 
default.api.timeout.ms = 60000 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.244+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.644+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Request joining group due to: need to re-join with the given member-id: consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2-95b41a35-bc4e-4e57-9f7d-94d8dfef3b72 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.814+00:00|INFO|network|pool-4-thread-1] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.816+00:00|INFO|network|pool-4-thread-1] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | 17:04:40 policy-pap | fetch.max.bytes = 52428800 17:04:40 kafka | [2024-02-16 17:02:33,020] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 17:04:40 policy-apex-pdp | enable.auto.commit = true 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.244+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.644+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 17:04:40 policy-clamp-ac-pf-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 policy-clamp-ac-http-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-db-migrator | > upgrade 0690-toscapolicy.sql 17:04:40 policy-pap | fetch.max.wait.ms = 500 17:04:40 kafka | [2024-02-16 17:02:33,042] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 17:04:40 policy-apex-pdp | exclude.internal.topics = true 17:04:40 policy-clamp-ac-sim-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"29e7ea94-9dca-43f0-a2b0-3f661708aa9f","timestamp":"2024-02-16T17:03:47.211534907Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:41.644+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, 
groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] (Re-)joining group 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.819+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.819+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | fetch.min.bytes = 1 17:04:40 kafka | [2024-02-16 17:02:33,043] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 17:04:40 policy-apex-pdp | fetch.max.bytes = 52428800 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.244+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:44.649+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Successfully joined group with generation Generation{generationId=1, memberId='consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2-95b41a35-bc4e-4e57-9f7d-94d8dfef3b72', protocol='range'} 17:04:40 policy-clamp-ac-pf-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 policy-clamp-ac-http-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 17:04:40 policy-pap | group.id = policy-pap 17:04:40 kafka | [2024-02-16 17:02:33,048] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 17:04:40 policy-apex-pdp | fetch.max.wait.ms = 500 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.262+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:44.660+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Finished assignment for group at generation 1: {consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2-95b41a35-bc4e-4e57-9f7d-94d8dfef3b72=Assignment(partitions=[policy-acruntime-participant-0])} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.820+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-clamp-ac-http-ppnt | 
[2024-02-16T17:03:47.820+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | group.instance.id = null 17:04:40 kafka | [2024-02-16 17:02:33,053] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 17:04:40 policy-apex-pdp | fetch.min.bytes = 1 17:04:40 policy-clamp-ac-sim-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"c5bce600-be93-48d6-9321-677a64168aee","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"757abd77-5d53-4c4b-b040-1abc1768cd48","timestamp":"2024-02-16T17:03:47.217554434Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:44.670+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Successfully synced group in generation Generation{generationId=1, memberId='consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2-95b41a35-bc4e-4e57-9f7d-94d8dfef3b72', protocol='range'} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.830+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.831+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | 17:04:40 policy-pap | heartbeat.interval.ms = 3000 17:04:40 kafka | [2024-02-16 17:02:33,081] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:40 policy-apex-pdp | group.id = clamp-grp 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.262+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:44.671+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Notifying assignor about the new Assignment(partitions=[policy-acruntime-participant-0]) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:44.675+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Adding newly assigned partitions: policy-acruntime-participant-0 17:04:40 policy-clamp-ac-pf-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-db-migrator | 17:04:40 policy-pap | interceptor.classes = [] 17:04:40 kafka | [2024-02-16 17:02:33,083] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:40 policy-apex-pdp | group.instance.id = null 17:04:40 policy-clamp-ac-sim-ppnt | 
[2024-02-16T17:03:47.270+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:44.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Found no committed offset for partition policy-acruntime-participant-0 17:04:40 policy-clamp-ac-http-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.830+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 17:04:40 policy-pap | internal.leave.group.on.close = true 17:04:40 kafka | [2024-02-16 17:02:33,086] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:40 policy-apex-pdp | heartbeat.interval.ms = 3000 17:04:40 policy-clamp-ac-sim-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"96f16c87-93d1-40e8-89e7-ca9ee0be53f1","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"90887c49-7ec2-421b-9586-f22755afb378","timestamp":"2024-02-16T17:03:47.203750282Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:44.699+00:00|INFO|SubscriptionState|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2, groupId=0b0f93e1-9727-45a5-b97d-714a24b64a62] Resetting offset for partition policy-acruntime-participant-0 to position FetchPosition{offset=4, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.831+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.841+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 kafka | [2024-02-16 17:02:33,086] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:40 policy-apex-pdp | interceptor.classes = [] 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.270+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:45.863+00:00|INFO|[/onap/policy/clamp/acm]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.838+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 17:04:40 policy-pap | isolation.level = read_uncommitted 17:04:40 kafka | [2024-02-16 17:02:33,087] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:40 policy-apex-pdp | internal.leave.group.on.close = true 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.754+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:45.864+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 17:04:40 policy-clamp-ac-http-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:47.841+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 kafka | [2024-02-16 17:02:33,106] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 17:04:40 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:45.867+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 17:04:40 policy-clamp-ac-sim-ppnt | 
{"participantDefinitionUpdates":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"Ericsson","startPhase":0},"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the operational policy for Performance Management Subscription Handling"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Starter"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Bridge"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Sink"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Starter 
microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Bridge microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Sink microservice"},"outProperties":{}}]}],"messageType":"PARTICIPANT_PRIME","messageId":"fc518aed-3741-43ec-b597-0cd9ccf000cb","timestamp":"2024-02-16T17:03:47.714617694Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:47.839+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:53.695+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | 17:04:40 policy-pap | max.partition.fetch.bytes = 1048576 17:04:40 kafka | [2024-02-16 17:02:33,106] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 17:04:40 policy-apex-pdp | isolation.level = read_uncommitted 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.120+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.814+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:53.700+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-pf-ppnt | {"participantUpdatesList":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","acElementList":[{"id":"709c62b3-8918-41b9-a747-d21eb79c6c20","definition":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"orderedState":"DEPLOY","toscaServiceTemplateFragment":{"data_types":{"onap.datatypes.ToscaConceptIdentifier":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","required":true},"version":{"name":"version","type":"string","type_version":"0.0.0","required":true}},"name":"onap.datatypes.ToscaConceptIdentifier","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EngineService":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the engine name","default":"ApexEngineService","required":false},"version":{"name":"version","type":"string","type_version":"0.0.0","description":"Specifies the engine version in double dotted 
format","default":"1.0.0","required":false},"id":{"name":"id","type":"integer","type_version":"0.0.0","description":"Specifies the engine id","required":true},"instance_count":{"name":"instance_count","type":"integer","type_version":"0.0.0","description":"Specifies the number of engine threads that should be run","required":true},"deployment_port":{"name":"deployment_port","type":"integer","type_version":"0.0.0","description":"Specifies the port to connect to for engine administration","default":1.0,"required":false},"policy_model_file_name":{"name":"policy_model_file_name","type":"string","type_version":"0.0.0","description":"The name of the file from which to read the APEX policy model","required":false},"policy_type_impl":{"name":"policy_type_impl","type":"string","type_version":"0.0.0","description":"The policy type implementation from which to read the APEX policy model","required":false},"periodic_event_period":{"name":"periodic_event_period","type":"string","type_version":"0.0.0","description":"The time interval in milliseconds for the periodic scanning event, 0 means don't scan","required":false},"engine":{"name":"engine","type":"onap.datatypes.native.apex.engineservice.Engine","type_version":"0.0.0","description":"The parameters for all engines in the APEX engine service","required":true}},"name":"onap.datatypes.native.apex.EngineService","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventHandler":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the event handler name, if not specified this is set to the key name","required":false},"carrier_technology":{"name":"carrier_technology","type":"onap.datatypes.native.apex.CarrierTechnology","type_version":"0.0.0","description":"Specifies the carrier technology of the event handler (such as REST/Web Socket/Kafka)","required":true},"event_protocol":{"name":"event_protocol","type":"onap.datatypes.native.apex.EventProtocol","type_version":"0.0.0","description":"Specifies the event protocol of events for the event handler (such as Yaml/JSON/XML/POJO)","required":true},"event_name":{"name":"event_name","type":"string","type_version":"0.0.0","description":"Specifies the event name for events on this event handler, if not specified, the event name is read from or written to the event being received or sent","required":false},"event_name_filter":{"name":"event_name_filter","type":"string","type_version":"0.0.0","description":"Specifies a filter as a regular expression, events that do not match the filter are dropped, the default is to let all events through","required":false},"synchronous_mode":{"name":"synchronous_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is syncronous (receive event and send response)","default":false,"required":false},"synchronous_peer":{"name":"synchronous_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in synchronous mode, this parameter is mandatory if the event handler is in synchronous mode","required":false},"synchronous_timeout":{"name":"synchronous_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for responses to be issued by APEX torequests, this parameter is mandatory if the event handler is in synchronous 
mode","required":false},"requestor_mode":{"name":"requestor_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is in requestor mode (send event and wait for response mode)","default":false,"required":false},"requestor_peer":{"name":"requestor_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in requestor mode, this parameter is mandatory if the event handler is in requestor mode","required":false},"requestor_timeout":{"name":"requestor_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for wait for responses to requests, this parameter is mandatory if the event handler is in requestor mode","required":false}},"name":"onap.datatypes.native.apex.EventHandler","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.CarrierTechnology":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the carrier technology (such as REST, Kafka, WebSocket)","required":true},"plugin_parameter_class_name":{"name":"plugin_parameter_class_name","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of event input or output for this carrier technology, defaults to the supplied input or output class","required":false}},"name":"onap.datatypes.native.apex.CarrierTechnology","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventProtocol":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the event protocol (such as Yaml, JSON, XML, or POJO)","required":true},"event_protocol_plugin_class":{"name":"event_protocol_plugin_class","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of the event protocol for this carrier technology, defaults to the supplied event protocol class","required":false}},"name":"onap.datatypes.native.apex.EventProtocol","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Environment":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the environment variable","required":true},"value":{"name":"value","type":"string","type_version":"0.0.0","description":"The value of the environment variable","required":true}},"name":"onap.datatypes.native.apex.Environment","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.Engine":{"properties":{"context":{"name":"context","type":"onap.datatypes.native.apex.engineservice.engine.Context","type_version":"0.0.0","description":"The properties for handling context in APEX engines, defaults to using Java maps for context","required":false},"executors":{"name":"executors","type":"map","type_version":"0.0.0","description":"The plugins for policy executors used in engines such as javascript, MVEL, Jython","required":true,"entry_schema":{"type":"string","type_version":"0.0.0","description":"The plugin class path for this policy 
executor"}}},"name":"onap.datatypes.native.apex.engineservice.Engine","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.engine.Context":{"properties":{"distributor":{"name":"distributor","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for distributing context between APEX PDPs at runtime","required":false},"schemas":{"name":"schemas","type":"map","type_version":"0.0.0","description":"The plugins for context schemas available in APEX PDPs such as Java and Avro","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0"}},"locking":{"name":"locking","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for locking context in and between APEX PDPs at runtime","required":false},"persistence":{"name":"persistence","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for persisting context for APEX PDPs at runtime","required":false}},"name":"onap.datatypes.native.apex.engineservice.engine.Context","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Plugin":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the executor such as Javascript, Jython or MVEL","required":true},"plugin_class_name":{"name":"plugin_class_name","type":"string","type_version":"0.0.0","description":"The class path of the plugin class for this executor","required":false}},"name":"onap.datatypes.native.apex.Plugin","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest":{"properties":{"restRequestId":{"name":"restRequestId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a REST request to be sent to a REST endpoint","required":true},"httpMethod":{"name":"httpMethod","type":"string","type_version":"0.0.0","description":"The REST method to use","required":true,"constraints":[{"valid_values":["POST","PUT","GET","DELETE"]}]},"path":{"name":"path","type":"string","type_version":"0.0.0","description":"The path of the REST request relative to the base URL","required":true},"body":{"name":"body","type":"string","type_version":"0.0.0","description":"The body of the REST request for PUT and POST requests","required":false},"expectedResponse":{"name":"expectedResponse","type":"integer","type_version":"0.0.0","description":"THe expected HTTP status code for the REST request","required":true,"constraints":[]}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity":{"properties":{"configurationEntityId":{"name":"configurationEntityId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a Configuration Entity to be handled by the HTTP Automation Composition Element","required":true},"restSequence":{"name":"restSequence","type":"list","type_version":"0.0.0","description":"A sequence of REST commands to send to the REST 
endpoint","required":false,"entry_schema":{"type":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","type_version":"1.0.0"}}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}}},"policy_types":{"onap.policies.Native":{"name":"onap.policies.Native","version":"1.0.0","derived_from":"tosca.policies.Root","metadata":{},"description":"a base policy type for all native PDP policies"},"onap.policies.native.Apex":{"properties":{"engine_service":{"name":"engine_service","type":"onap.datatypes.native.apex.EngineService","type_version":"0.0.0","description":"APEX Engine Service Parameters","required":false},"inputs":{"name":"inputs","type":"map","type_version":"0.0.0","description":"Inputs for handling events coming into the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"outputs":{"name":"outputs","type":"map","type_version":"0.0.0","description":"Outputs for handling events going out of the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"environment":{"name":"environment","type":"list","type_version":"0.0.0","description":"Envioronmental parameters for the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Environment","type_version":"0.0.0"}}},"name":"onap.policies.native.Apex","version":"1.0.0","derived_from":"onap.policies.Native","metadata":{},"description":"a policy type for native apex policies"}},"topology_template":{"policies":[{"onap.policies.native.apex.ac.element":{"type":"onap.policies.native.Apex","type_version":"1.0.0","properties":{"engineServiceParameters":{"name":"MyApexEngine","version":"0.0.1","id":45,"instanceCount":2,"deploymentPort":12561,"engineParameters":{"executorParameters":{"JAVASCRIPT":{"parameterClassName":"org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"}},"contextParameters":{"parameterClassName":"org.onap.policy.apex.context.parameters.ContextParameters","schemaParameters":{"Json":{"parameterClassName":"org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"}}}},"policy_type_impl":{"policies":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"policyMap":{"entry":[{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"policyKey":{"name":"ReceiveEventPolicy","version":"0.0.1"},"template":"Freestyle","state":{"entry":[{"key":"DecideForwardingState","value":{"stateKey":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DecideForwardingState"},"trigger":{"name":"AcElementEvent","version":"0.0.1"},"stateOutputs":{"entry":[{"key":"CreateForwardPayload","value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"},"outgoingEvent":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"outgoingEventReference":[{"name":"DmaapResponseStatusEvent","version":"0.0.1"}],"nextState":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}}}]},"contextAlbumReference":[],"taskSelectionLogic":{"key":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"},"logicFlavour":"UNDEFINED","logic":""},"stateFinalizerLogicMap":{"entry":[]},"defaultTask":{"name":"ForwardPayloadTa
sk","version":"0.0.1"},"taskReferences":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"ReceiveEventPolicy"},"outputType":"DIRECT","output":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"}}}]}}}]},"firstState":"DecideForwardingState"}}]}},"tasks":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"taskMap":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"inputEvent":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"},"outputEvents":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]},"taskParameters":{"entry":[]},"contextAlbumReference":[{"name":"ACElementAlbum","version":"0.0.1"}],"taskLogic":{"key":{"parentKeyName":"ForwardPayloadTask","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"TaskLogic"},"logicFlavour":"JAVASCRIPT","logic":"/*\n * ============LICENSE_START=======================================================\n * Copyright (C) 2022 Nordix. 
All rights reserved.\n * ================================================================================\n * Licensed under the Apache License, Version 2.0 (the 'License');\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an 'AS IS' BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n * SPDX-License-Identifier: Apache-2.0\n * ============LICENSE_END=========================================================\n */\n\nexecutor.logger.info(executor.subject.id);\nexecutor.logger.info(executor.inFields);\n\nvar msgResponse = executor.inFields.get('DmaapResponseEvent');\nexecutor.logger.info('Task in progress with mesages: ' + msgResponse);\n\nvar elementId = msgResponse.get('elementId').get('name');\n\nif (msgResponse.get('messageType') == 'STATUS' &&\n (elementId == 'onap.policy.clamp.ac.startertobridge'\n || elementId == 'onap.policy.clamp.ac.bridgetosink')) {\n\n var receiverId = '';\n if (elementId == 'onap.policy.clamp.ac.startertobridge') {\n receiverId = 'onap.policy.clamp.ac.bridge';\n } else {\n receiverId = 'onap.policy.clamp.ac.sink';\n }\n\n var elementIdResponse = new java.util.HashMap();\n elementIdResponse.put('name', receiverId);\n elementIdResponse.put('version', msgResponse.get('elementId').get('version'));\n\n var dmaapResponse = new java.util.HashMap();\n dmaapResponse.put('elementId', elementIdResponse);\n\n var message = msgResponse.get('message') + ' trace added from policy';\n dmaapResponse.put('message', message);\n dmaapResponse.put('messageType', 'STATUS');\n dmaapResponse.put('messageId', msgResponse.get('messageId'));\n dmaapResponse.put('timestamp', msgResponse.get('timestamp'));\n\n executor.logger.info('Sending forwarding Event to Ac element: ' + dmaapResponse);\n\n executor.outFields.put('DmaapResponseStatusEvent', 
dmaapResponse);\n}\n\ntrue;"}}}]}},"events":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"eventMap":{"entry":[{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"file","parameter":{"entry":[{"key":"final_status","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"final_status"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}},{"key":"message","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"message"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]}},"albums":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"albums":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"scope":"policy","isWritable":true,"itemSchema":{"name":"ACEventType","version":"0.0.1"}}}]}},"schemas":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"schemas":{"entry":[{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"schemaFlavour":"Json","schemaDefinition":"{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"elementId\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"version\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"name\",\n \"version\"\n ]\n },\n \"message\": {\n \"type\": \"string\"\n },\n \"messageType\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"elementId\",\n \"message\",\n \"messageType\"\n 
]\n}"}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.Integer"}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.String"}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.util.UUID"}}]}},"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"keyInformation":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"keyInfoMap":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"UUID":"7cddfab8-6d3f-3f7f-8ac3-e2eb5979c900","description":"Generated description for concept referred to by key \"ACElementAlbum:0.0.1\""}},{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"UUID":"dab78794-b666-3929-a75b-70d634b04fe5","description":"Generated description for concept referred to by key \"ACEventType:0.0.1\""}},{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"UUID":"da478611-7d77-3c46-b4be-be968769ba4e","description":"Generated description for concept referred to by key \"APEXacElementPolicy:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"UUID":"fa8dc15e-8c8d-3de3-a0f8-585b76511175","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Albums:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"UUID":"8508cd65-8dd2-342d-a5c6-1570810dbe2b","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Events:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"UUID":"09e6927d-c5ac-3779-919f-9333994eed22","description":"Generated description for concept referred to by key \"APEXacElementPolicy_KeyInfo:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"UUID":"cade3c9a-1600-3642-a6f4-315612187f46","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Policies:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"UUID":"5bb4a8e9-35fa-37db-9a49-48ef036a7ba9","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Schemas:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"UUID":"2527eeec-0d1f-3094-ad3f-212622b12836","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Tasks:0.0.1\""}},{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"UUID":"32c013e2-2740-3986-a626-cbdf665b63e9","description":"Generated description for concept referred to by key 
\"AcElementEvent:0.0.1\""}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"UUID":"2715cb6c-2778-3461-8b69-871e79f95935","description":"Generated description for concept referred to by key \"DmaapResponseStatusEvent:0.0.1\""}},{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"UUID":"51defa03-1ecf-3314-bf34-2a652bce57fa","description":"Generated description for concept referred to by key \"ForwardPayloadTask:0.0.1\""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"UUID":"c540f048-96af-35e3-a36e-e9c29377cba7","description":"Generated description for concept referred to by key \"LogEvent:0.0.1\""}},{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"UUID":"568b7345-9de1-36d3-b6a3-9b857e6809a1","description":"Generated description for concept referred to by key \"ReceiveEventPolicy:0.0.1\""}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"UUID":"153791fd-ae0a-36a7-88a5-309a7936415d","description":"Generated description for concept referred to by key \"SimpleIntType:0.0.1\""}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"UUID":"8a4957cf-9493-3a76-8c22-a208e23259af","description":"Generated description for concept referred to by key \"SimpleStringType:0.0.1\""}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"UUID":"6a8cc68e-dfc8-3403-9c6d-071c886b319c","description":"Generated description for concept referred to by key 
\"UUIDType:0.0.1\""}}]}}}},"eventInputParameters":{"DmaapConsumer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","groupId":"clamp-grp","enableAutoCommit":true,"autoCommitTime":1000,"sessionTimeout":30000,"consumerPollTime":100,"consumerTopicList":["ac_element_msg"],"keyDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","valueDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseEvent"}},"eventName":"AcElementEvent","eventNameFilter":"AcElementEvent"}},"eventOutputParameters":{"logOutputter":{"carrierTechnologyParameters":{"carrierTechnology":"FILE","parameters":{"fileName":"outputevents.log"}},"eventProtocolParameters":{"eventProtocol":"JSON"}},"DmaapReplyProducer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","acks":"all","retries":0,"batchSize":16384,"lingerTime":1,"bufferMemory":33554432,"producerTopic":"policy_update_msg","keySerializer":"org.apache.kafka.common.serialization.StringSerializer","valueSerializer":"org.apache.kafka.common.serialization.StringSerializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseStatusEvent"}},"eventNameFilter":"LogEvent|DmaapResponseStatusEvent"}}},"name":"onap.policies.native.apex.ac.element","version":"1.0.0","metadata":{"policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0"}}}]},"name":"NULL","version":"0.0.0"},"properties":{"policy_type_id":{"name":"onap.policies.native.Apex","version":"1.0.0"},"policy_id":{"get_input":"acm_element_policy"}}}]}],"startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_DEPLOY","messageId":"46590b55-0c49-46ae-b243-90cfb0a03d4c","timestamp":"2024-02-16T17:03:53.679413689Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-pap | max.poll.interval.ms = 300000 17:04:40 kafka | [2024-02-16 17:02:33,130] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 17:04:40 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-clamp-runtime-acm | [] 17:04:40 policy-clamp-ac-sim-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 17:04:40 policy-clamp-ac-http-ppnt | {"participantUpdatesList":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","acElementList":[{"id":"709c62b3-8918-41b9-a747-d21eb79c6c20","definition":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"orderedState":"DEPLOY","toscaServiceTemplateFragment":{"data_types":{"onap.datatypes.ToscaConceptIdentifier":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","required":true},"version":{"name":"version","type":"string","type_version":"0.0.0","required":true}},"name":"onap.datatypes.ToscaConceptIdentifier","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EngineService":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the engine name","default":"ApexEngineService","required":false},"version":{"name":"version","type":"string","type_version":"0.0.0","description":"Specifies the engine version in double dotted format","default":"1.0.0","required":false},"id":{"name":"id","type":"integer","type_version":"0.0.0","description":"Specifies the engine id","required":true},"instance_count":{"name":"instance_count","type":"integer","type_version":"0.0.0","description":"Specifies the number of engine threads that should be run","required":true},"deployment_port":{"name":"deployment_port","type":"integer","type_version":"0.0.0","description":"Specifies the port to connect to for engine administration","default":1.0,"required":false},"policy_model_file_name":{"name":"policy_model_file_name","type":"string","type_version":"0.0.0","description":"The name of the file from which to read the APEX policy model","required":false},"policy_type_impl":{"name":"policy_type_impl","type":"string","type_version":"0.0.0","description":"The policy type implementation from which to read the APEX policy model","required":false},"periodic_event_period":{"name":"periodic_event_period","type":"string","type_version":"0.0.0","description":"The time interval in milliseconds for the periodic scanning event, 0 means don't scan","required":false},"engine":{"name":"engine","type":"onap.datatypes.native.apex.engineservice.Engine","type_version":"0.0.0","description":"The parameters for all engines in the APEX engine service","required":true}},"name":"onap.datatypes.native.apex.EngineService","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventHandler":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the event handler name, if not specified this is set to the key name","required":false},"carrier_technology":{"name":"carrier_technology","type":"onap.datatypes.native.apex.CarrierTechnology","type_version":"0.0.0","description":"Specifies the carrier technology of the event handler (such as REST/Web 
Socket/Kafka)","required":true},"event_protocol":{"name":"event_protocol","type":"onap.datatypes.native.apex.EventProtocol","type_version":"0.0.0","description":"Specifies the event protocol of events for the event handler (such as Yaml/JSON/XML/POJO)","required":true},"event_name":{"name":"event_name","type":"string","type_version":"0.0.0","description":"Specifies the event name for events on this event handler, if not specified, the event name is read from or written to the event being received or sent","required":false},"event_name_filter":{"name":"event_name_filter","type":"string","type_version":"0.0.0","description":"Specifies a filter as a regular expression, events that do not match the filter are dropped, the default is to let all events through","required":false},"synchronous_mode":{"name":"synchronous_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is syncronous (receive event and send response)","default":false,"required":false},"synchronous_peer":{"name":"synchronous_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in synchronous mode, this parameter is mandatory if the event handler is in synchronous mode","required":false},"synchronous_timeout":{"name":"synchronous_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for responses to be issued by APEX torequests, this parameter is mandatory if the event handler is in synchronous mode","required":false},"requestor_mode":{"name":"requestor_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is in requestor mode (send event and wait for response mode)","default":false,"required":false},"requestor_peer":{"name":"requestor_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in requestor mode, this parameter is mandatory if the event handler is in requestor mode","required":false},"requestor_timeout":{"name":"requestor_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for wait for responses to requests, this parameter is mandatory if the event handler is in requestor mode","required":false}},"name":"onap.datatypes.native.apex.EventHandler","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.CarrierTechnology":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the carrier technology (such as REST, Kafka, WebSocket)","required":true},"plugin_parameter_class_name":{"name":"plugin_parameter_class_name","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of event input or output for this carrier technology, defaults to the supplied input or output class","required":false}},"name":"onap.datatypes.native.apex.CarrierTechnology","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventProtocol":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the event protocol (such as Yaml, JSON, XML, or POJO)","required":true},"event_protocol_plugin_class":{"name":"event_protocol_plugin_class","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of the event protocol for this 
carrier technology, defaults to the supplied event protocol class","required":false}},"name":"onap.datatypes.native.apex.EventProtocol","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Environment":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the environment variable","required":true},"value":{"name":"value","type":"string","type_version":"0.0.0","description":"The value of the environment variable","required":true}},"name":"onap.datatypes.native.apex.Environment","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.Engine":{"properties":{"context":{"name":"context","type":"onap.datatypes.native.apex.engineservice.engine.Context","type_version":"0.0.0","description":"The properties for handling context in APEX engines, defaults to using Java maps for context","required":false},"executors":{"name":"executors","type":"map","type_version":"0.0.0","description":"The plugins for policy executors used in engines such as javascript, MVEL, Jython","required":true,"entry_schema":{"type":"string","type_version":"0.0.0","description":"The plugin class path for this policy executor"}}},"name":"onap.datatypes.native.apex.engineservice.Engine","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.engine.Context":{"properties":{"distributor":{"name":"distributor","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for distributing context between APEX PDPs at runtime","required":false},"schemas":{"name":"schemas","type":"map","type_version":"0.0.0","description":"The plugins for context schemas available in APEX PDPs such as Java and Avro","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0"}},"locking":{"name":"locking","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for locking context in and between APEX PDPs at runtime","required":false},"persistence":{"name":"persistence","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for persisting context for APEX PDPs at runtime","required":false}},"name":"onap.datatypes.native.apex.engineservice.engine.Context","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Plugin":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the executor such as Javascript, Jython or MVEL","required":true},"plugin_class_name":{"name":"plugin_class_name","type":"string","type_version":"0.0.0","description":"The class path of the plugin class for this executor","required":false}},"name":"onap.datatypes.native.apex.Plugin","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest":{"properties":{"restRequestId":{"name":"restRequestId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a REST request to be sent to a REST endpoint","required":true},"httpMethod":{"name":"httpMethod","type":"string","type_version":"0.0.0","description":"The REST method to use","required":true,"constraints":[{"valid_values":["POST","PUT","GET","DELETE"]}]},"path":{"name":"path","type":"string","type_version":"0.0.0","description":"The 
path of the REST request relative to the base URL","required":true},"body":{"name":"body","type":"string","type_version":"0.0.0","description":"The body of the REST request for PUT and POST requests","required":false},"expectedResponse":{"name":"expectedResponse","type":"integer","type_version":"0.0.0","description":"THe expected HTTP status code for the REST request","required":true,"constraints":[]}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity":{"properties":{"configurationEntityId":{"name":"configurationEntityId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a Configuration Entity to be handled by the HTTP Automation Composition Element","required":true},"restSequence":{"name":"restSequence","type":"list","type_version":"0.0.0","description":"A sequence of REST commands to send to the REST endpoint","required":false,"entry_schema":{"type":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","type_version":"1.0.0"}}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}}},"policy_types":{"onap.policies.Native":{"name":"onap.policies.Native","version":"1.0.0","derived_from":"tosca.policies.Root","metadata":{},"description":"a base policy type for all native PDP policies"},"onap.policies.native.Apex":{"properties":{"engine_service":{"name":"engine_service","type":"onap.datatypes.native.apex.EngineService","type_version":"0.0.0","description":"APEX Engine Service Parameters","required":false},"inputs":{"name":"inputs","type":"map","type_version":"0.0.0","description":"Inputs for handling events coming into the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"outputs":{"name":"outputs","type":"map","type_version":"0.0.0","description":"Outputs for handling events going out of the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"environment":{"name":"environment","type":"list","type_version":"0.0.0","description":"Envioronmental parameters for the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Environment","type_version":"0.0.0"}}},"name":"onap.policies.native.Apex","version":"1.0.0","derived_from":"onap.policies.Native","metadata":{},"description":"a policy type for native apex 
policies"}},"topology_template":{"policies":[{"onap.policies.native.apex.ac.element":{"type":"onap.policies.native.Apex","type_version":"1.0.0","properties":{"engineServiceParameters":{"name":"MyApexEngine","version":"0.0.1","id":45,"instanceCount":2,"deploymentPort":12561,"engineParameters":{"executorParameters":{"JAVASCRIPT":{"parameterClassName":"org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"}},"contextParameters":{"parameterClassName":"org.onap.policy.apex.context.parameters.ContextParameters","schemaParameters":{"Json":{"parameterClassName":"org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"}}}},"policy_type_impl":{"policies":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"policyMap":{"entry":[{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"policyKey":{"name":"ReceiveEventPolicy","version":"0.0.1"},"template":"Freestyle","state":{"entry":[{"key":"DecideForwardingState","value":{"stateKey":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DecideForwardingState"},"trigger":{"name":"AcElementEvent","version":"0.0.1"},"stateOutputs":{"entry":[{"key":"CreateForwardPayload","value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"},"outgoingEvent":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"outgoingEventReference":[{"name":"DmaapResponseStatusEvent","version":"0.0.1"}],"nextState":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}}}]},"contextAlbumReference":[],"taskSelectionLogic":{"key":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"},"logicFlavour":"UNDEFINED","logic":""},"stateFinalizerLogicMap":{"entry":[]},"defaultTask":{"name":"ForwardPayloadTask","version":"0.0.1"},"taskReferences":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"ReceiveEventPolicy"},"outputType":"DIRECT","output":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"}}}]}}}]},"firstState":"DecideForwardingState"}}]}},"tasks":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"taskMap":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"inputEvent":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"},"outputEvents":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPoli
cyState":""}}]},"taskParameters":{"entry":[]},"contextAlbumReference":[{"name":"ACElementAlbum","version":"0.0.1"}],"taskLogic":{"key":{"parentKeyName":"ForwardPayloadTask","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"TaskLogic"},"logicFlavour":"JAVASCRIPT","logic":"/*\n * ============LICENSE_START=======================================================\n * Copyright (C) 2022 Nordix. All rights reserved.\n * ================================================================================\n * Licensed under the Apache License, Version 2.0 (the 'License');\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an 'AS IS' BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n * SPDX-License-Identifier: Apache-2.0\n * ============LICENSE_END=========================================================\n */\n\nexecutor.logger.info(executor.subject.id);\nexecutor.logger.info(executor.inFields);\n\nvar msgResponse = executor.inFields.get('DmaapResponseEvent');\nexecutor.logger.info('Task in progress with mesages: ' + msgResponse);\n\nvar elementId = msgResponse.get('elementId').get('name');\n\nif (msgResponse.get('messageType') == 'STATUS' &&\n (elementId == 'onap.policy.clamp.ac.startertobridge'\n || elementId == 'onap.policy.clamp.ac.bridgetosink')) {\n\n var receiverId = '';\n if (elementId == 'onap.policy.clamp.ac.startertobridge') {\n receiverId = 'onap.policy.clamp.ac.bridge';\n } else {\n receiverId = 'onap.policy.clamp.ac.sink';\n }\n\n var elementIdResponse = new java.util.HashMap();\n elementIdResponse.put('name', receiverId);\n elementIdResponse.put('version', msgResponse.get('elementId').get('version'));\n\n var dmaapResponse = new java.util.HashMap();\n dmaapResponse.put('elementId', elementIdResponse);\n\n var message = msgResponse.get('message') + ' trace added from policy';\n dmaapResponse.put('message', message);\n dmaapResponse.put('messageType', 'STATUS');\n dmaapResponse.put('messageId', msgResponse.get('messageId'));\n dmaapResponse.put('timestamp', msgResponse.get('timestamp'));\n\n executor.logger.info('Sending forwarding Event to Ac element: ' + dmaapResponse);\n\n executor.outFields.put('DmaapResponseStatusEvent', 
dmaapResponse);\n}\n\ntrue;"}}}]}},"events":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"eventMap":{"entry":[{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"file","parameter":{"entry":[{"key":"final_status","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"final_status"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}},{"key":"message","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"message"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]}},"albums":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"albums":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"scope":"policy","isWritable":true,"itemSchema":{"name":"ACEventType","version":"0.0.1"}}}]}},"schemas":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"schemas":{"entry":[{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"schemaFlavour":"Json","schemaDefinition":"{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"elementId\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"version\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"name\",\n \"version\"\n ]\n },\n \"message\": {\n \"type\": \"string\"\n },\n \"messageType\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"elementId\",\n \"message\",\n \"messageType\"\n 
]\n}"}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.Integer"}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.String"}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.util.UUID"}}]}},"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"keyInformation":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"keyInfoMap":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"UUID":"7cddfab8-6d3f-3f7f-8ac3-e2eb5979c900","description":"Generated description for concept referred to by key \"ACElementAlbum:0.0.1\""}},{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"UUID":"dab78794-b666-3929-a75b-70d634b04fe5","description":"Generated description for concept referred to by key \"ACEventType:0.0.1\""}},{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"UUID":"da478611-7d77-3c46-b4be-be968769ba4e","description":"Generated description for concept referred to by key \"APEXacElementPolicy:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"UUID":"fa8dc15e-8c8d-3de3-a0f8-585b76511175","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Albums:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"UUID":"8508cd65-8dd2-342d-a5c6-1570810dbe2b","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Events:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"UUID":"09e6927d-c5ac-3779-919f-9333994eed22","description":"Generated description for concept referred to by key \"APEXacElementPolicy_KeyInfo:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"UUID":"cade3c9a-1600-3642-a6f4-315612187f46","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Policies:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"UUID":"5bb4a8e9-35fa-37db-9a49-48ef036a7ba9","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Schemas:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"UUID":"2527eeec-0d1f-3094-ad3f-212622b12836","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Tasks:0.0.1\""}},{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"UUID":"32c013e2-2740-3986-a626-cbdf665b63e9","description":"Generated description for concept referred to by key 
\"AcElementEvent:0.0.1\""}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"UUID":"2715cb6c-2778-3461-8b69-871e79f95935","description":"Generated description for concept referred to by key \"DmaapResponseStatusEvent:0.0.1\""}},{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"UUID":"51defa03-1ecf-3314-bf34-2a652bce57fa","description":"Generated description for concept referred to by key \"ForwardPayloadTask:0.0.1\""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"UUID":"c540f048-96af-35e3-a36e-e9c29377cba7","description":"Generated description for concept referred to by key \"LogEvent:0.0.1\""}},{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"UUID":"568b7345-9de1-36d3-b6a3-9b857e6809a1","description":"Generated description for concept referred to by key \"ReceiveEventPolicy:0.0.1\""}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"UUID":"153791fd-ae0a-36a7-88a5-309a7936415d","description":"Generated description for concept referred to by key \"SimpleIntType:0.0.1\""}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"UUID":"8a4957cf-9493-3a76-8c22-a208e23259af","description":"Generated description for concept referred to by key \"SimpleStringType:0.0.1\""}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"UUID":"6a8cc68e-dfc8-3403-9c6d-071c886b319c","description":"Generated description for concept referred to by key 
\"UUIDType:0.0.1\""}}]}}}},"eventInputParameters":{"DmaapConsumer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","groupId":"clamp-grp","enableAutoCommit":true,"autoCommitTime":1000,"sessionTimeout":30000,"consumerPollTime":100,"consumerTopicList":["ac_element_msg"],"keyDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","valueDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseEvent"}},"eventName":"AcElementEvent","eventNameFilter":"AcElementEvent"}},"eventOutputParameters":{"logOutputter":{"carrierTechnologyParameters":{"carrierTechnology":"FILE","parameters":{"fileName":"outputevents.log"}},"eventProtocolParameters":{"eventProtocol":"JSON"}},"DmaapReplyProducer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","acks":"all","retries":0,"batchSize":16384,"lingerTime":1,"bufferMemory":33554432,"producerTopic":"policy_update_msg","keySerializer":"org.apache.kafka.common.serialization.StringSerializer","valueSerializer":"org.apache.kafka.common.serialization.StringSerializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseStatusEvent"}},"eventNameFilter":"LogEvent|DmaapResponseStatusEvent"}}},"name":"onap.policies.native.apex.ac.element","version":"1.0.0","metadata":{"policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0"}}}]},"name":"NULL","version":"0.0.0"},"properties":{"policy_type_id":{"name":"onap.policies.native.Apex","version":"1.0.0"},"policy_id":{"get_input":"acm_element_policy"}}}]}],"startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_DEPLOY","messageId":"46590b55-0c49-46ae-b243-90cfb0a03d4c","timestamp":"2024-02-16T17:03:53.679413689Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:53.766+00:00|INFO|AutomationCompositionElementHandler|pool-4-thread-2] Found Policy Types in automation composition definition: NULL , Creating Policy Types 17:04:40 policy-pap | max.poll.records = 500 17:04:40 kafka | [2024-02-16 17:02:33,209] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1708102953193,1708102953193,1,0,0,72057616930504705,258,0,27 17:04:40 policy-apex-pdp | max.partition.fetch.bytes = 1048576 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.121+00:00|INFO|network|http-nio-6969-exec-3] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.815+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:57.710+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:54.106+00:00|INFO|GsonMessageBodyHandler|pool-4-thread-2] Using GSON for REST calls 17:04:40 policy-pap | metadata.max.age.ms = 300000 17:04:40 kafka | (kafka.zk.KafkaZkClient) 17:04:40 
policy-apex-pdp | max.poll.interval.ms = 300000 17:04:40 policy-clamp-runtime-acm | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"751ed6c0-ea59-4009-9448-d8d54dacf1ac","timestamp":"2024-02-16T17:03:47.116528928Z"} 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.836+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 17:04:40 policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[{"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","deployState":"UNDEPLOYED","lockState":"NONE","elements":[{"automationCompositionElementId":"709c62b3-8918-41b9-a747-d21eb79c6c20","deployState":"DEPLOYING","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{}}]}],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"9248b3f5-302a-4c92-b6c4-812b252c6967","timestamp":"2024-02-16T17:03:57.674696111Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:54.117+00:00|INFO|GsonMessageBodyHandler|pool-4-thread-2] Using GSON for REST calls 17:04:40 policy-pap | metric.reporters = [] 17:04:40 kafka | [2024-02-16 17:02:33,210] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 17:04:40 policy-apex-pdp | max.poll.records = 500 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.216+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-sim-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:57.711+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:55.936+00:00|INFO|AutomationCompositionElementHandler|pool-4-thread-2] Found Policies in automation composition definition: NULL , Creating Policies 17:04:40 policy-pap | metrics.num.samples = 2 17:04:40 kafka | [2024-02-16 17:02:33,295] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 17:04:40 policy-apex-pdp | metadata.max.age.ms = 300000 17:04:40 policy-clamp-runtime-acm | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"751ed6c0-ea59-4009-9448-d8d54dacf1ac","timestamp":"2024-02-16T17:03:47.116528928Z"} 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.837+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | 
[2024-02-16T17:03:57.731+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:56.515+00:00|INFO|AutomationCompositionElementHandler|pool-4-thread-2] PolicyTypes/Policies for the automation composition element : 709c62b3-8918-41b9-a747-d21eb79c6c20 are created successfully 17:04:40 policy-pap | metrics.recording.level = INFO 17:04:40 kafka | [2024-02-16 17:02:33,305] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:40 policy-apex-pdp | metric.reporters = [] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.218+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS_REQ 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.842+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-http-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DEPLOYED","lockState":"LOCKED","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deployed"}},"responseTo":"46590b55-0c49-46ae-b243-90cfb0a03d4c","result":true,"stateChangeResult":"NO_ERROR","message":"Deployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:56.540+00:00|INFO|GsonMessageBodyHandler|pool-4-thread-2] Using GSON for REST calls 17:04:40 policy-pap | metrics.sample.window.ms = 30000 17:04:40 kafka | [2024-02-16 17:02:33,314] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:40 policy-apex-pdp | metrics.num.samples = 2 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.255+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-sim-ppnt | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:03:57.731+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:56.542+00:00|INFO|GsonMessageBodyHandler|pool-4-thread-2] Using GSON for REST calls 17:04:40 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 kafka | [2024-02-16 17:02:33,315] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:40 policy-apex-pdp | metrics.recording.level = INFO 17:04:40 policy-clamp-runtime-acm | 
{"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"939773c5-cc6c-46b4-b31a-65f7c2af01e5","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"b2d45c0a-d1ba-4949-b418-95d49ee361f7","timestamp":"2024-02-16T17:03:47.198657381Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:47.842+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:21.336+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:57.674+00:00|INFO|AutomationCompositionElementHandler|pool-4-thread-2] Policies deployed to 709c62b3-8918-41b9-a747-d21eb79c6c20 successfully 17:04:40 policy-pap | receive.buffer.bytes = 65536 17:04:40 kafka | [2024-02-16 17:02:33,318] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 17:04:40 policy-apex-pdp | metrics.sample.window.ms = 30000 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.318+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:53.696+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:40 policy-clamp-ac-http-ppnt | {"deployOrderedState":"UNDEPLOY","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","timestamp":"2024-02-16T17:04:21.322183159Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:57.675+00:00|INFO|network|pool-4-thread-2] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | reconnect.backoff.max.ms = 1000 17:04:40 kafka | [2024-02-16 17:02:33,332] INFO [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 17:04:40 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:40 policy-clamp-runtime-acm | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"29e7ea94-9dca-43f0-a2b0-3f661708aa9f","timestamp":"2024-02-16T17:03:47.211534907Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | {"participantUpdatesList":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","acElementList":[{"id":"709c62b3-8918-41b9-a747-d21eb79c6c20","definition":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"orderedState":"DEPLOY","toscaServiceTemplateFragment":{"data_types":{"onap.datatypes.ToscaConceptIdentifier":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","required":true},"version":{"name":"version","type":"string","type_version":"0.0.0","required":true}},"name":"onap.datatypes.ToscaConceptIdentifier","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EngineService":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the engine name","default":"ApexEngineService","required":false},"version":{"name":"version","type":"string","type_version":"0.0.0","description":"Specifies the engine version in double dotted format","default":"1.0.0","required":false},"id":{"name":"id","type":"integer","type_version":"0.0.0","description":"Specifies the engine id","required":true},"instance_count":{"name":"instance_count","type":"integer","type_version":"0.0.0","description":"Specifies the number of engine threads that should be run","required":true},"deployment_port":{"name":"deployment_port","type":"integer","type_version":"0.0.0","description":"Specifies the port to connect to for engine administration","default":1.0,"required":false},"policy_model_file_name":{"name":"policy_model_file_name","type":"string","type_version":"0.0.0","description":"The name of the file from which to read the APEX policy model","required":false},"policy_type_impl":{"name":"policy_type_impl","type":"string","type_version":"0.0.0","description":"The policy type implementation from which to read the APEX policy model","required":false},"periodic_event_period":{"name":"periodic_event_period","type":"string","type_version":"0.0.0","description":"The time interval in milliseconds for the periodic scanning event, 0 means don't scan","required":false},"engine":{"name":"engine","type":"onap.datatypes.native.apex.engineservice.Engine","type_version":"0.0.0","description":"The parameters for all engines in the APEX engine service","required":true}},"name":"onap.datatypes.native.apex.EngineService","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventHandler":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the event handler name, if not specified this is set to the key 
name","required":false},"carrier_technology":{"name":"carrier_technology","type":"onap.datatypes.native.apex.CarrierTechnology","type_version":"0.0.0","description":"Specifies the carrier technology of the event handler (such as REST/Web Socket/Kafka)","required":true},"event_protocol":{"name":"event_protocol","type":"onap.datatypes.native.apex.EventProtocol","type_version":"0.0.0","description":"Specifies the event protocol of events for the event handler (such as Yaml/JSON/XML/POJO)","required":true},"event_name":{"name":"event_name","type":"string","type_version":"0.0.0","description":"Specifies the event name for events on this event handler, if not specified, the event name is read from or written to the event being received or sent","required":false},"event_name_filter":{"name":"event_name_filter","type":"string","type_version":"0.0.0","description":"Specifies a filter as a regular expression, events that do not match the filter are dropped, the default is to let all events through","required":false},"synchronous_mode":{"name":"synchronous_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is syncronous (receive event and send response)","default":false,"required":false},"synchronous_peer":{"name":"synchronous_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in synchronous mode, this parameter is mandatory if the event handler is in synchronous mode","required":false},"synchronous_timeout":{"name":"synchronous_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for responses to be issued by APEX torequests, this parameter is mandatory if the event handler is in synchronous mode","required":false},"requestor_mode":{"name":"requestor_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is in requestor mode (send event and wait for response mode)","default":false,"required":false},"requestor_peer":{"name":"requestor_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in requestor mode, this parameter is mandatory if the event handler is in requestor mode","required":false},"requestor_timeout":{"name":"requestor_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for wait for responses to requests, this parameter is mandatory if the event handler is in requestor mode","required":false}},"name":"onap.datatypes.native.apex.EventHandler","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.CarrierTechnology":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the carrier technology (such as REST, Kafka, WebSocket)","required":true},"plugin_parameter_class_name":{"name":"plugin_parameter_class_name","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of event input or output for this carrier technology, defaults to the supplied input or output class","required":false}},"name":"onap.datatypes.native.apex.CarrierTechnology","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventProtocol":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the event protocol (such as Yaml, JSON, XML, or 
POJO)","required":true},"event_protocol_plugin_class":{"name":"event_protocol_plugin_class","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of the event protocol for this carrier technology, defaults to the supplied event protocol class","required":false}},"name":"onap.datatypes.native.apex.EventProtocol","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Environment":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the environment variable","required":true},"value":{"name":"value","type":"string","type_version":"0.0.0","description":"The value of the environment variable","required":true}},"name":"onap.datatypes.native.apex.Environment","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.Engine":{"properties":{"context":{"name":"context","type":"onap.datatypes.native.apex.engineservice.engine.Context","type_version":"0.0.0","description":"The properties for handling context in APEX engines, defaults to using Java maps for context","required":false},"executors":{"name":"executors","type":"map","type_version":"0.0.0","description":"The plugins for policy executors used in engines such as javascript, MVEL, Jython","required":true,"entry_schema":{"type":"string","type_version":"0.0.0","description":"The plugin class path for this policy executor"}}},"name":"onap.datatypes.native.apex.engineservice.Engine","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.engine.Context":{"properties":{"distributor":{"name":"distributor","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for distributing context between APEX PDPs at runtime","required":false},"schemas":{"name":"schemas","type":"map","type_version":"0.0.0","description":"The plugins for context schemas available in APEX PDPs such as Java and Avro","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0"}},"locking":{"name":"locking","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for locking context in and between APEX PDPs at runtime","required":false},"persistence":{"name":"persistence","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for persisting context for APEX PDPs at runtime","required":false}},"name":"onap.datatypes.native.apex.engineservice.engine.Context","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Plugin":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the executor such as Javascript, Jython or MVEL","required":true},"plugin_class_name":{"name":"plugin_class_name","type":"string","type_version":"0.0.0","description":"The class path of the plugin class for this executor","required":false}},"name":"onap.datatypes.native.apex.Plugin","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest":{"properties":{"restRequestId":{"name":"restRequestId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a REST request to be sent to a REST 
endpoint","required":true},"httpMethod":{"name":"httpMethod","type":"string","type_version":"0.0.0","description":"The REST method to use","required":true,"constraints":[{"valid_values":["POST","PUT","GET","DELETE"]}]},"path":{"name":"path","type":"string","type_version":"0.0.0","description":"The path of the REST request relative to the base URL","required":true},"body":{"name":"body","type":"string","type_version":"0.0.0","description":"The body of the REST request for PUT and POST requests","required":false},"expectedResponse":{"name":"expectedResponse","type":"integer","type_version":"0.0.0","description":"THe expected HTTP status code for the REST request","required":true,"constraints":[]}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity":{"properties":{"configurationEntityId":{"name":"configurationEntityId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a Configuration Entity to be handled by the HTTP Automation Composition Element","required":true},"restSequence":{"name":"restSequence","type":"list","type_version":"0.0.0","description":"A sequence of REST commands to send to the REST endpoint","required":false,"entry_schema":{"type":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","type_version":"1.0.0"}}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}}},"policy_types":{"onap.policies.Native":{"name":"onap.policies.Native","version":"1.0.0","derived_from":"tosca.policies.Root","metadata":{},"description":"a base policy type for all native PDP policies"},"onap.policies.native.Apex":{"properties":{"engine_service":{"name":"engine_service","type":"onap.datatypes.native.apex.EngineService","type_version":"0.0.0","description":"APEX Engine Service Parameters","required":false},"inputs":{"name":"inputs","type":"map","type_version":"0.0.0","description":"Inputs for handling events coming into the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"outputs":{"name":"outputs","type":"map","type_version":"0.0.0","description":"Outputs for handling events going out of the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"environment":{"name":"environment","type":"list","type_version":"0.0.0","description":"Envioronmental parameters for the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Environment","type_version":"0.0.0"}}},"name":"onap.policies.native.Apex","version":"1.0.0","derived_from":"onap.policies.Native","metadata":{},"description":"a policy type for native apex 
policies"}},"topology_template":{"policies":[{"onap.policies.native.apex.ac.element":{"type":"onap.policies.native.Apex","type_version":"1.0.0","properties":{"engineServiceParameters":{"name":"MyApexEngine","version":"0.0.1","id":45,"instanceCount":2,"deploymentPort":12561,"engineParameters":{"executorParameters":{"JAVASCRIPT":{"parameterClassName":"org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"}},"contextParameters":{"parameterClassName":"org.onap.policy.apex.context.parameters.ContextParameters","schemaParameters":{"Json":{"parameterClassName":"org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"}}}},"policy_type_impl":{"policies":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"policyMap":{"entry":[{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"policyKey":{"name":"ReceiveEventPolicy","version":"0.0.1"},"template":"Freestyle","state":{"entry":[{"key":"DecideForwardingState","value":{"stateKey":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DecideForwardingState"},"trigger":{"name":"AcElementEvent","version":"0.0.1"},"stateOutputs":{"entry":[{"key":"CreateForwardPayload","value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"},"outgoingEvent":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"outgoingEventReference":[{"name":"DmaapResponseStatusEvent","version":"0.0.1"}],"nextState":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}}}]},"contextAlbumReference":[],"taskSelectionLogic":{"key":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"},"logicFlavour":"UNDEFINED","logic":""},"stateFinalizerLogicMap":{"entry":[]},"defaultTask":{"name":"ForwardPayloadTask","version":"0.0.1"},"taskReferences":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"ReceiveEventPolicy"},"outputType":"DIRECT","output":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"}}}]}}}]},"firstState":"DecideForwardingState"}}]}},"tasks":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"taskMap":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"inputEvent":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"},"outputEvents":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPoli
cyState":""}}]},"taskParameters":{"entry":[]},"contextAlbumReference":[{"name":"ACElementAlbum","version":"0.0.1"}],"taskLogic":{"key":{"parentKeyName":"ForwardPayloadTask","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"TaskLogic"},"logicFlavour":"JAVASCRIPT","logic":"/*\n * ============LICENSE_START=======================================================\n * Copyright (C) 2022 Nordix. All rights reserved.\n * ================================================================================\n * Licensed under the Apache License, Version 2.0 (the 'License');\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an 'AS IS' BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n * SPDX-License-Identifier: Apache-2.0\n * ============LICENSE_END=========================================================\n */\n\nexecutor.logger.info(executor.subject.id);\nexecutor.logger.info(executor.inFields);\n\nvar msgResponse = executor.inFields.get('DmaapResponseEvent');\nexecutor.logger.info('Task in progress with mesages: ' + msgResponse);\n\nvar elementId = msgResponse.get('elementId').get('name');\n\nif (msgResponse.get('messageType') == 'STATUS' &&\n (elementId == 'onap.policy.clamp.ac.startertobridge'\n || elementId == 'onap.policy.clamp.ac.bridgetosink')) {\n\n var receiverId = '';\n if (elementId == 'onap.policy.clamp.ac.startertobridge') {\n receiverId = 'onap.policy.clamp.ac.bridge';\n } else {\n receiverId = 'onap.policy.clamp.ac.sink';\n }\n\n var elementIdResponse = new java.util.HashMap();\n elementIdResponse.put('name', receiverId);\n elementIdResponse.put('version', msgResponse.get('elementId').get('version'));\n\n var dmaapResponse = new java.util.HashMap();\n dmaapResponse.put('elementId', elementIdResponse);\n\n var message = msgResponse.get('message') + ' trace added from policy';\n dmaapResponse.put('message', message);\n dmaapResponse.put('messageType', 'STATUS');\n dmaapResponse.put('messageId', msgResponse.get('messageId'));\n dmaapResponse.put('timestamp', msgResponse.get('timestamp'));\n\n executor.logger.info('Sending forwarding Event to Ac element: ' + dmaapResponse);\n\n executor.outFields.put('DmaapResponseStatusEvent', 
dmaapResponse);\n}\n\ntrue;"}}}]}},"events":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"eventMap":{"entry":[{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"file","parameter":{"entry":[{"key":"final_status","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"final_status"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}},{"key":"message","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"message"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]}},"albums":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"albums":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"scope":"policy","isWritable":true,"itemSchema":{"name":"ACEventType","version":"0.0.1"}}}]}},"schemas":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"schemas":{"entry":[{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"schemaFlavour":"Json","schemaDefinition":"{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"elementId\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"version\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"name\",\n \"version\"\n ]\n },\n \"message\": {\n \"type\": \"string\"\n },\n \"messageType\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"elementId\",\n \"message\",\n \"messageType\"\n 
]\n}"}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.Integer"}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.String"}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.util.UUID"}}]}},"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"keyInformation":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"keyInfoMap":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"UUID":"7cddfab8-6d3f-3f7f-8ac3-e2eb5979c900","description":"Generated description for concept referred to by key \"ACElementAlbum:0.0.1\""}},{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"UUID":"dab78794-b666-3929-a75b-70d634b04fe5","description":"Generated description for concept referred to by key \"ACEventType:0.0.1\""}},{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"UUID":"da478611-7d77-3c46-b4be-be968769ba4e","description":"Generated description for concept referred to by key \"APEXacElementPolicy:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"UUID":"fa8dc15e-8c8d-3de3-a0f8-585b76511175","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Albums:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"UUID":"8508cd65-8dd2-342d-a5c6-1570810dbe2b","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Events:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"UUID":"09e6927d-c5ac-3779-919f-9333994eed22","description":"Generated description for concept referred to by key \"APEXacElementPolicy_KeyInfo:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"UUID":"cade3c9a-1600-3642-a6f4-315612187f46","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Policies:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"UUID":"5bb4a8e9-35fa-37db-9a49-48ef036a7ba9","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Schemas:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"UUID":"2527eeec-0d1f-3094-ad3f-212622b12836","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Tasks:0.0.1\""}},{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"UUID":"32c013e2-2740-3986-a626-cbdf665b63e9","description":"Generated description for concept referred to by key 
\"AcElementEvent:0.0.1\""}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"UUID":"2715cb6c-2778-3461-8b69-871e79f95935","description":"Generated description for concept referred to by key \"DmaapResponseStatusEvent:0.0.1\""}},{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"UUID":"51defa03-1ecf-3314-bf34-2a652bce57fa","description":"Generated description for concept referred to by key \"ForwardPayloadTask:0.0.1\""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"UUID":"c540f048-96af-35e3-a36e-e9c29377cba7","description":"Generated description for concept referred to by key \"LogEvent:0.0.1\""}},{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"UUID":"568b7345-9de1-36d3-b6a3-9b857e6809a1","description":"Generated description for concept referred to by key \"ReceiveEventPolicy:0.0.1\""}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"UUID":"153791fd-ae0a-36a7-88a5-309a7936415d","description":"Generated description for concept referred to by key \"SimpleIntType:0.0.1\""}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"UUID":"8a4957cf-9493-3a76-8c22-a208e23259af","description":"Generated description for concept referred to by key \"SimpleStringType:0.0.1\""}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"UUID":"6a8cc68e-dfc8-3403-9c6d-071c886b319c","description":"Generated description for concept referred to by key 
\"UUIDType:0.0.1\""}}]}}}},"eventInputParameters":{"DmaapConsumer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","groupId":"clamp-grp","enableAutoCommit":true,"autoCommitTime":1000,"sessionTimeout":30000,"consumerPollTime":100,"consumerTopicList":["ac_element_msg"],"keyDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","valueDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseEvent"}},"eventName":"AcElementEvent","eventNameFilter":"AcElementEvent"}},"eventOutputParameters":{"logOutputter":{"carrierTechnologyParameters":{"carrierTechnology":"FILE","parameters":{"fileName":"outputevents.log"}},"eventProtocolParameters":{"eventProtocol":"JSON"}},"DmaapReplyProducer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","acks":"all","retries":0,"batchSize":16384,"lingerTime":1,"bufferMemory":33554432,"producerTopic":"policy_update_msg","keySerializer":"org.apache.kafka.common.serialization.StringSerializer","valueSerializer":"org.apache.kafka.common.serialization.StringSerializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseStatusEvent"}},"eventNameFilter":"LogEvent|DmaapResponseStatusEvent"}}},"name":"onap.policies.native.apex.ac.element","version":"1.0.0","metadata":{"policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0"}}}]},"name":"NULL","version":"0.0.0"},"properties":{"policy_type_id":{"name":"onap.policies.native.Apex","version":"1.0.0"},"policy_id":{"get_input":"acm_element_policy"}}}]}],"startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_DEPLOY","messageId":"46590b55-0c49-46ae-b243-90cfb0a03d4c","timestamp":"2024-02-16T17:03:53.679413689Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:22.415+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[{"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","deployState":"UNDEPLOYED","lockState":"NONE","elements":[{"automationCompositionElementId":"709c62b3-8918-41b9-a747-d21eb79c6c20","deployState":"DEPLOYING","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{}}]}],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"9248b3f5-302a-4c92-b6c4-812b252c6967","timestamp":"2024-02-16T17:03:57.674696111Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-pap | reconnect.backoff.ms = 50 17:04:40 kafka | [2024-02-16 17:02:33,343] INFO [GroupCoordinator 1]: Starting up. 
(kafka.coordinator.group.GroupCoordinator) 17:04:40 policy-apex-pdp | receive.buffer.bytes = 65536 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.333+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:57.699+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"UNDEPLOYED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Undeployed"}},"responseTo":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","result":true,"stateChangeResult":"NO_ERROR","message":"Undeployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:57.696+00:00|INFO|ParticipantMessagePublisher|pool-4-thread-2] Sent Participant Status message to CLAMP - ParticipantStatus(super=ParticipantMessage(messageType=PARTICIPANT_STATUS, messageId=9248b3f5-302a-4c92-b6c4-812b252c6967, timestamp=2024-02-16T17:03:57.674696111Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c03, automationCompositionId=null, compositionId=715407e5-17b4-40bf-9633-c1ca5735224f), state=ON_LINE, participantDefinitionUpdates=[], automationCompositionInfoList=[AutomationCompositionInfo(automationCompositionId=5f8b554f-0760-497d-900b-f38674e2d074, deployState=UNDEPLOYED, lockState=NONE, elements=[AutomationCompositionElementInfo(automationCompositionElementId=709c62b3-8918-41b9-a747-d21eb79c6c20, deployState=DEPLOYING, lockState=NONE, operationalState=ENABLED, useState=IDLE, outProperties={})])], participantSupportedElementType=[ParticipantSupportedElementType(id=0b8ba591-6c02-4faf-8911-f6ce37e044af, typeName=org.onap.policy.clamp.acm.PolicyAutomationCompositionElement, typeVersion=1.0.0)]) 17:04:40 policy-pap | request.timeout.ms = 30000 17:04:40 kafka | [2024-02-16 17:02:33,343] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 17:04:40 policy-apex-pdp | reconnect.backoff.max.ms = 1000 17:04:40 policy-clamp-runtime-acm | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"c5bce600-be93-48d6-9321-677a64168aee","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"757abd77-5d53-4c4b-b040-1abc1768cd48","timestamp":"2024-02-16T17:03:47.217554434Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | 
{"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[{"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","deployState":"UNDEPLOYED","lockState":"NONE","elements":[{"automationCompositionElementId":"709c62b3-8918-41b9-a747-d21eb79c6c20","deployState":"DEPLOYING","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{}}]}],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"9248b3f5-302a-4c92-b6c4-812b252c6967","timestamp":"2024-02-16T17:03:57.674696111Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:22.416+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-pap | retry.backoff.ms = 100 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:57.711+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-apex-pdp | reconnect.backoff.ms = 50 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.347+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | > upgrade 0730-toscaproperty.sql 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:57.699+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.704+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.client.callback.handler.class = null 17:04:40 policy-clamp-ac-pf-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[{"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","deployState":"UNDEPLOYED","lockState":"NONE","elements":[{"automationCompositionElementId":"709c62b3-8918-41b9-a747-d21eb79c6c20","deployState":"DEPLOYING","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{}}]}],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"9248b3f5-302a-4c92-b6c4-812b252c6967","timestamp":"2024-02-16T17:03:57.674696111Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-apex-pdp | request.timeout.ms = 30000 17:04:40 policy-apex-pdp | retry.backoff.ms = 100 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:57.727+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | {"deployOrderedState":"DELETE","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","timestamp":"2024-02-16T17:04:26.693700299Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-pap | sasl.jaas.config = null 17:04:40 
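The PARTICIPANT_STATUS payload above, and the matching ParticipantStatus(...) line logged by the policy participant, carry a small, regular envelope: messageType, messageId, timestamp, participantId, compositionId, state, plus the element lists. Since the participants report "Using GSON for REST calls", a minimal way to read such a message is a plain Gson binding. The StatusMsg class below is a hypothetical, trimmed-down mirror of only the fields visible in the log, not the actual ONAP ParticipantStatus class, and the sample JSON reuses identifiers from the log above.

// Illustrative sketch, assuming Gson: decode the PARTICIPANT_STATUS envelope seen above.
import com.google.gson.Gson;
import java.util.List;
import java.util.Map;

public class ParticipantStatusSketch {
    // Hypothetical, minimal mirror of the logged payload; field names match the JSON keys above.
    static class StatusMsg {
        String messageType;      // e.g. PARTICIPANT_STATUS
        String messageId;
        String timestamp;
        String participantId;
        String compositionId;
        String state;            // e.g. ON_LINE
        List<Map<String, Object>> automationCompositionInfoList;
        List<Map<String, Object>> participantSupportedElementType;
    }

    public static void main(String[] args) {
        String json = "{\"state\":\"ON_LINE\",\"messageType\":\"PARTICIPANT_STATUS\","
                + "\"messageId\":\"9248b3f5-302a-4c92-b6c4-812b252c6967\","
                + "\"participantId\":\"101c62b3-8918-41b9-a747-d21eb79c6c03\","
                + "\"compositionId\":\"715407e5-17b4-40bf-9633-c1ca5735224f\","
                + "\"automationCompositionInfoList\":[]}";
        StatusMsg msg = new Gson().fromJson(json, StatusMsg.class);
        // The MessageTypeDispatcher entries in the log accept or discard events by messageType;
        // a listener interested only in status messages would branch the same way.
        if ("PARTICIPANT_STATUS".equals(msg.messageType)) {
            System.out.println("participant " + msg.participantId + " is " + msg.state
                    + " for composition " + msg.compositionId);
        }
    }
}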
policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:57.716+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS 17:04:40 policy-apex-pdp | sasl.client.callback.handler.class = null 17:04:40 kafka | [2024-02-16 17:02:33,349] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:40 policy-clamp-ac-sim-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DEPLOYED","lockState":"LOCKED","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deployed"}},"responseTo":"46590b55-0c49-46ae-b243-90cfb0a03d4c","result":true,"stateChangeResult":"NO_ERROR","message":"Deployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.707+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:57.718+00:00|INFO|network|pool-4-thread-2] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"96f16c87-93d1-40e8-89e7-ca9ee0be53f1","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"90887c49-7ec2-421b-9586-f22755afb378","timestamp":"2024-02-16T17:03:47.203750282Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-apex-pdp | sasl.jaas.config = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:03:57.727+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-clamp-ac-http-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-ac-pf-ppnt | 
{"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DEPLOYED","lockState":"LOCKED","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deployed"}},"responseTo":"46590b55-0c49-46ae-b243-90cfb0a03d4c","result":true,"stateChangeResult":"NO_ERROR","message":"Deployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.715+00:00|INFO|network|pool-5-thread-1] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 kafka | [2024-02-16 17:02:33,352] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:21.332+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.730+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.kerberos.service.name = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:57.738+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DEPLOYED","lockState":"LOCKED","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deployed"}},"responseTo":"46590b55-0c49-46ae-b243-90cfb0a03d4c","result":true,"stateChangeResult":"NO_ERROR","message":"Deployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-runtime-acm | {"participantDefinitionUpdates":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"Ericsson","startPhase":0},"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the operational policy for Performance Management Subscription Handling"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element 
Starter"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Bridge"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Sink"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Starter microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Bridge microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Sink microservice"},"outProperties":{}}]}],"messageType":"PARTICIPANT_PRIME","messageId":"fc518aed-3741-43ec-b597-0cd9ccf000cb","timestamp":"2024-02-16T17:03:47.714617694Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 kafka | [2024-02-16 17:02:33,375] INFO [TransactionCoordinator id=1] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator) 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | {"deployOrderedState":"UNDEPLOY","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","timestamp":"2024-02-16T17:04:21.322183159Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-http-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DELETED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deleted"}},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Deleted","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:03:57.738+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.773+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,381] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 17:04:40 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:22.416+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.732+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-apex-pdp | sasl.kerberos.service.name = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:21.339+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | {"participantDefinitionUpdates":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"Ericsson","startPhase":0},"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the operational policy for Performance Management Subscription 
Handling"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Starter"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Bridge"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","startPhase":0,"uninitializedToPassiveTimeout":300,"podStatusCheckInterval":30},"name":"onap.policy.clamp.ac.element.K8S_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the K8S microservice for AC Element Sink"},"outProperties":{}}]},{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","automationCompositionElementDefinitionList":[{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_StarterAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Starter microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_BridgeAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Bridge 
microservice"},"outProperties":{}},{"acElementDefinitionId":{"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3"},"automationCompositionElementToscaNodeTemplate":{"type":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","type_version":"1.0.0","properties":{"provider":"ONAP","uninitializedToPassiveTimeout":300,"startPhase":1},"name":"onap.policy.clamp.ac.element.Http_SinkAutomationCompositionElement","version":"1.2.3","metadata":{},"description":"Automation composition element for the http requests of AC Element Sink microservice"},"outProperties":{}}]}],"messageType":"PARTICIPANT_PRIME","messageId":"fc518aed-3741-43ec-b597-0cd9ccf000cb","timestamp":"2024-02-16T17:03:47.714617694Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 kafka | [2024-02-16 17:02:33,382] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"UNDEPLOYED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Undeployed"}},"responseTo":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","result":true,"stateChangeResult":"NO_ERROR","message":"Undeployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.741+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.login.callback.handler.class = null 17:04:40 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-clamp-ac-pf-ppnt | {"deployOrderedState":"UNDEPLOY","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","timestamp":"2024-02-16T17:04:21.322183159Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.774+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME 17:04:40 kafka | [2024-02-16 17:02:33,383] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:22.417+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-clamp-ac-http-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-pap | sasl.login.class = null 17:04:40 policy-apex-pdp | 
sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:22.403+00:00|INFO|network|pool-4-thread-3] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:47.774+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,382] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.717+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.741+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-pap | sasl.login.connect.timeout.ms = null 17:04:40 policy-apex-pdp | sasl.login.callback.handler.class = null 17:04:40 policy-clamp-ac-pf-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"UNDEPLOYED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Undeployed"}},"responseTo":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","result":true,"stateChangeResult":"NO_ERROR","message":"Undeployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-runtime-acm | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 kafka | [2024-02-16 17:02:33,390] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | {"deployOrderedState":"DELETE","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","timestamp":"2024-02-16T17:04:26.693700299Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.741+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.login.read.timeout.ms = null 17:04:40 policy-apex-pdp | sasl.login.class = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:22.415+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:48.032+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,397] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.720+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | 
{"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-apex-pdp | sasl.login.connect.timeout.ms = null 17:04:40 policy-clamp-ac-pf-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"UNDEPLOYED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Undeployed"}},"responseTo":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","result":true,"stateChangeResult":"NO_ERROR","message":"Undeployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-runtime-acm | {"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 kafka | [2024-02-16 17:02:33,400] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 17:04:40 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 17:04:40 policy-clamp-ac-sim-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.741+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-apex-pdp | sasl.login.read.timeout.ms = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:22.415+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:48.185+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,416] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.736+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.744+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.704+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | 
{"compositionState":"PRIMED","responseTo":"fc518aed-3741-43ec-b597-0cd9ccf000cb","result":true,"stateChangeResult":"NO_ERROR","message":"Primed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 kafka | [2024-02-16 17:02:33,422] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 17:04:40 policy-clamp-ac-sim-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DELETED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deleted"}},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Deleted","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-ac-http-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-clamp-ac-pf-ppnt | {"deployOrderedState":"DELETE","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","timestamp":"2024-02-16T17:04:26.693700299Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:53.682+00:00|INFO|network|pool-4-thread-1] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,429] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.737+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.745+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.707+00:00|INFO|network|pool-4-thread-4] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,439] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:40 policy-clamp-runtime-acm | 
{"participantUpdatesList":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","acElementList":[{"id":"709c62b3-8918-41b9-a747-d21eb79c6c20","definition":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"orderedState":"DEPLOY","toscaServiceTemplateFragment":{"data_types":{"onap.datatypes.ToscaConceptIdentifier":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","required":true},"version":{"name":"version","type":"string","type_version":"0.0.0","required":true}},"name":"onap.datatypes.ToscaConceptIdentifier","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EngineService":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the engine name","default":"ApexEngineService","required":false},"version":{"name":"version","type":"string","type_version":"0.0.0","description":"Specifies the engine version in double dotted format","default":"1.0.0","required":false},"id":{"name":"id","type":"integer","type_version":"0.0.0","description":"Specifies the engine id","required":true},"instance_count":{"name":"instance_count","type":"integer","type_version":"0.0.0","description":"Specifies the number of engine threads that should be run","required":true},"deployment_port":{"name":"deployment_port","type":"integer","type_version":"0.0.0","description":"Specifies the port to connect to for engine administration","default":1.0,"required":false},"policy_model_file_name":{"name":"policy_model_file_name","type":"string","type_version":"0.0.0","description":"The name of the file from which to read the APEX policy model","required":false},"policy_type_impl":{"name":"policy_type_impl","type":"string","type_version":"0.0.0","description":"The policy type implementation from which to read the APEX policy model","required":false},"periodic_event_period":{"name":"periodic_event_period","type":"string","type_version":"0.0.0","description":"The time interval in milliseconds for the periodic scanning event, 0 means don't scan","required":false},"engine":{"name":"engine","type":"onap.datatypes.native.apex.engineservice.Engine","type_version":"0.0.0","description":"The parameters for all engines in the APEX engine service","required":true}},"name":"onap.datatypes.native.apex.EngineService","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventHandler":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the event handler name, if not specified this is set to the key name","required":false},"carrier_technology":{"name":"carrier_technology","type":"onap.datatypes.native.apex.CarrierTechnology","type_version":"0.0.0","description":"Specifies the carrier technology of the event handler (such as REST/Web Socket/Kafka)","required":true},"event_protocol":{"name":"event_protocol","type":"onap.datatypes.native.apex.EventProtocol","type_version":"0.0.0","description":"Specifies the event protocol of events for the event handler (such as Yaml/JSON/XML/POJO)","required":true},"event_name":{"name":"event_name","type":"string","type_version":"0.0.0","description":"Specifies the event name for events on this event handler, if not specified, the event name is read from or written to the event being received or sent","required":false},"event_name_filter":{"name":"event_name_filter","type":"string","type_version":"0.0.0","description":"Specifies a filter as a regular 
expression, events that do not match the filter are dropped, the default is to let all events through","required":false},"synchronous_mode":{"name":"synchronous_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is syncronous (receive event and send response)","default":false,"required":false},"synchronous_peer":{"name":"synchronous_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in synchronous mode, this parameter is mandatory if the event handler is in synchronous mode","required":false},"synchronous_timeout":{"name":"synchronous_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for responses to be issued by APEX torequests, this parameter is mandatory if the event handler is in synchronous mode","required":false},"requestor_mode":{"name":"requestor_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is in requestor mode (send event and wait for response mode)","default":false,"required":false},"requestor_peer":{"name":"requestor_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in requestor mode, this parameter is mandatory if the event handler is in requestor mode","required":false},"requestor_timeout":{"name":"requestor_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for wait for responses to requests, this parameter is mandatory if the event handler is in requestor mode","required":false}},"name":"onap.datatypes.native.apex.EventHandler","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.CarrierTechnology":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the carrier technology (such as REST, Kafka, WebSocket)","required":true},"plugin_parameter_class_name":{"name":"plugin_parameter_class_name","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of event input or output for this carrier technology, defaults to the supplied input or output class","required":false}},"name":"onap.datatypes.native.apex.CarrierTechnology","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventProtocol":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the event protocol (such as Yaml, JSON, XML, or POJO)","required":true},"event_protocol_plugin_class":{"name":"event_protocol_plugin_class","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of the event protocol for this carrier technology, defaults to the supplied event protocol class","required":false}},"name":"onap.datatypes.native.apex.EventProtocol","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Environment":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the environment variable","required":true},"value":{"name":"value","type":"string","type_version":"0.0.0","description":"The value of the environment 
variable","required":true}},"name":"onap.datatypes.native.apex.Environment","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.Engine":{"properties":{"context":{"name":"context","type":"onap.datatypes.native.apex.engineservice.engine.Context","type_version":"0.0.0","description":"The properties for handling context in APEX engines, defaults to using Java maps for context","required":false},"executors":{"name":"executors","type":"map","type_version":"0.0.0","description":"The plugins for policy executors used in engines such as javascript, MVEL, Jython","required":true,"entry_schema":{"type":"string","type_version":"0.0.0","description":"The plugin class path for this policy executor"}}},"name":"onap.datatypes.native.apex.engineservice.Engine","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.engine.Context":{"properties":{"distributor":{"name":"distributor","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for distributing context between APEX PDPs at runtime","required":false},"schemas":{"name":"schemas","type":"map","type_version":"0.0.0","description":"The plugins for context schemas available in APEX PDPs such as Java and Avro","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0"}},"locking":{"name":"locking","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for locking context in and between APEX PDPs at runtime","required":false},"persistence":{"name":"persistence","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for persisting context for APEX PDPs at runtime","required":false}},"name":"onap.datatypes.native.apex.engineservice.engine.Context","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Plugin":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the executor such as Javascript, Jython or MVEL","required":true},"plugin_class_name":{"name":"plugin_class_name","type":"string","type_version":"0.0.0","description":"The class path of the plugin class for this executor","required":false}},"name":"onap.datatypes.native.apex.Plugin","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest":{"properties":{"restRequestId":{"name":"restRequestId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a REST request to be sent to a REST endpoint","required":true},"httpMethod":{"name":"httpMethod","type":"string","type_version":"0.0.0","description":"The REST method to use","required":true,"constraints":[{"valid_values":["POST","PUT","GET","DELETE"]}]},"path":{"name":"path","type":"string","type_version":"0.0.0","description":"The path of the REST request relative to the base URL","required":true},"body":{"name":"body","type":"string","type_version":"0.0.0","description":"The body of the REST request for PUT and POST requests","required":false},"expectedResponse":{"name":"expectedResponse","type":"integer","type_version":"0.0.0","description":"THe expected HTTP status code for the REST 
request","required":true,"constraints":[]}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity":{"properties":{"configurationEntityId":{"name":"configurationEntityId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a Configuration Entity to be handled by the HTTP Automation Composition Element","required":true},"restSequence":{"name":"restSequence","type":"list","type_version":"0.0.0","description":"A sequence of REST commands to send to the REST endpoint","required":false,"entry_schema":{"type":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","type_version":"1.0.0"}}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}}},"policy_types":{"onap.policies.Native":{"name":"onap.policies.Native","version":"1.0.0","derived_from":"tosca.policies.Root","metadata":{},"description":"a base policy type for all native PDP policies"},"onap.policies.native.Apex":{"properties":{"engine_service":{"name":"engine_service","type":"onap.datatypes.native.apex.EngineService","type_version":"0.0.0","description":"APEX Engine Service Parameters","required":false},"inputs":{"name":"inputs","type":"map","type_version":"0.0.0","description":"Inputs for handling events coming into the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"outputs":{"name":"outputs","type":"map","type_version":"0.0.0","description":"Outputs for handling events going out of the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"environment":{"name":"environment","type":"list","type_version":"0.0.0","description":"Envioronmental parameters for the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Environment","type_version":"0.0.0"}}},"name":"onap.policies.native.Apex","version":"1.0.0","derived_from":"onap.policies.Native","metadata":{},"description":"a policy type for native apex 
policies"}},"topology_template":{"policies":[{"onap.policies.native.apex.ac.element":{"type":"onap.policies.native.Apex","type_version":"1.0.0","properties":{"engineServiceParameters":{"name":"MyApexEngine","version":"0.0.1","id":45,"instanceCount":2,"deploymentPort":12561,"engineParameters":{"executorParameters":{"JAVASCRIPT":{"parameterClassName":"org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"}},"contextParameters":{"parameterClassName":"org.onap.policy.apex.context.parameters.ContextParameters","schemaParameters":{"Json":{"parameterClassName":"org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"}}}},"policy_type_impl":{"policies":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"policyMap":{"entry":[{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"policyKey":{"name":"ReceiveEventPolicy","version":"0.0.1"},"template":"Freestyle","state":{"entry":[{"key":"DecideForwardingState","value":{"stateKey":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DecideForwardingState"},"trigger":{"name":"AcElementEvent","version":"0.0.1"},"stateOutputs":{"entry":[{"key":"CreateForwardPayload","value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"},"outgoingEvent":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"outgoingEventReference":[{"name":"DmaapResponseStatusEvent","version":"0.0.1"}],"nextState":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}}}]},"contextAlbumReference":[],"taskSelectionLogic":{"key":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"},"logicFlavour":"UNDEFINED","logic":""},"stateFinalizerLogicMap":{"entry":[]},"defaultTask":{"name":"ForwardPayloadTask","version":"0.0.1"},"taskReferences":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"ReceiveEventPolicy"},"outputType":"DIRECT","output":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"}}}]}}}]},"firstState":"DecideForwardingState"}}]}},"tasks":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"taskMap":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"inputEvent":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"},"outputEvents":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPoli
cyState":""}}]},"taskParameters":{"entry":[]},"contextAlbumReference":[{"name":"ACElementAlbum","version":"0.0.1"}],"taskLogic":{"key":{"parentKeyName":"ForwardPayloadTask","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"TaskLogic"},"logicFlavour":"JAVASCRIPT","logic":"/*\n * ============LICENSE_START=======================================================\n * Copyright (C) 2022 Nordix. All rights reserved.\n * ================================================================================\n * Licensed under the Apache License, Version 2.0 (the 'License');\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an 'AS IS' BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n * SPDX-License-Identifier: Apache-2.0\n * ============LICENSE_END=========================================================\n */\n\nexecutor.logger.info(executor.subject.id);\nexecutor.logger.info(executor.inFields);\n\nvar msgResponse = executor.inFields.get('DmaapResponseEvent');\nexecutor.logger.info('Task in progress with mesages: ' + msgResponse);\n\nvar elementId = msgResponse.get('elementId').get('name');\n\nif (msgResponse.get('messageType') == 'STATUS' &&\n (elementId == 'onap.policy.clamp.ac.startertobridge'\n || elementId == 'onap.policy.clamp.ac.bridgetosink')) {\n\n var receiverId = '';\n if (elementId == 'onap.policy.clamp.ac.startertobridge') {\n receiverId = 'onap.policy.clamp.ac.bridge';\n } else {\n receiverId = 'onap.policy.clamp.ac.sink';\n }\n\n var elementIdResponse = new java.util.HashMap();\n elementIdResponse.put('name', receiverId);\n elementIdResponse.put('version', msgResponse.get('elementId').get('version'));\n\n var dmaapResponse = new java.util.HashMap();\n dmaapResponse.put('elementId', elementIdResponse);\n\n var message = msgResponse.get('message') + ' trace added from policy';\n dmaapResponse.put('message', message);\n dmaapResponse.put('messageType', 'STATUS');\n dmaapResponse.put('messageId', msgResponse.get('messageId'));\n dmaapResponse.put('timestamp', msgResponse.get('timestamp'));\n\n executor.logger.info('Sending forwarding Event to Ac element: ' + dmaapResponse);\n\n executor.outFields.put('DmaapResponseStatusEvent', 
dmaapResponse);\n}\n\ntrue;"}}}]}},"events":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"eventMap":{"entry":[{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"file","parameter":{"entry":[{"key":"final_status","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"final_status"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}},{"key":"message","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"message"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]}},"albums":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"albums":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"scope":"policy","isWritable":true,"itemSchema":{"name":"ACEventType","version":"0.0.1"}}}]}},"schemas":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"schemas":{"entry":[{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"schemaFlavour":"Json","schemaDefinition":"{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"elementId\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"version\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"name\",\n \"version\"\n ]\n },\n \"message\": {\n \"type\": \"string\"\n },\n \"messageType\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"elementId\",\n \"message\",\n \"messageType\"\n 
]\n}"}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.Integer"}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.String"}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.util.UUID"}}]}},"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"keyInformation":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"keyInfoMap":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"UUID":"7cddfab8-6d3f-3f7f-8ac3-e2eb5979c900","description":"Generated description for concept referred to by key \"ACElementAlbum:0.0.1\""}},{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"UUID":"dab78794-b666-3929-a75b-70d634b04fe5","description":"Generated description for concept referred to by key \"ACEventType:0.0.1\""}},{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"UUID":"da478611-7d77-3c46-b4be-be968769ba4e","description":"Generated description for concept referred to by key \"APEXacElementPolicy:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"UUID":"fa8dc15e-8c8d-3de3-a0f8-585b76511175","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Albums:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"UUID":"8508cd65-8dd2-342d-a5c6-1570810dbe2b","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Events:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"UUID":"09e6927d-c5ac-3779-919f-9333994eed22","description":"Generated description for concept referred to by key \"APEXacElementPolicy_KeyInfo:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"UUID":"cade3c9a-1600-3642-a6f4-315612187f46","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Policies:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"UUID":"5bb4a8e9-35fa-37db-9a49-48ef036a7ba9","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Schemas:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"UUID":"2527eeec-0d1f-3094-ad3f-212622b12836","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Tasks:0.0.1\""}},{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"UUID":"32c013e2-2740-3986-a626-cbdf665b63e9","description":"Generated description for concept referred to by key 
\"AcElementEvent:0.0.1\""}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"UUID":"2715cb6c-2778-3461-8b69-871e79f95935","description":"Generated description for concept referred to by key \"DmaapResponseStatusEvent:0.0.1\""}},{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"UUID":"51defa03-1ecf-3314-bf34-2a652bce57fa","description":"Generated description for concept referred to by key \"ForwardPayloadTask:0.0.1\""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"UUID":"c540f048-96af-35e3-a36e-e9c29377cba7","description":"Generated description for concept referred to by key \"LogEvent:0.0.1\""}},{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"UUID":"568b7345-9de1-36d3-b6a3-9b857e6809a1","description":"Generated description for concept referred to by key \"ReceiveEventPolicy:0.0.1\""}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"UUID":"153791fd-ae0a-36a7-88a5-309a7936415d","description":"Generated description for concept referred to by key \"SimpleIntType:0.0.1\""}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"UUID":"8a4957cf-9493-3a76-8c22-a208e23259af","description":"Generated description for concept referred to by key \"SimpleStringType:0.0.1\""}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"UUID":"6a8cc68e-dfc8-3403-9c6d-071c886b319c","description":"Generated description for concept referred to by key 
\"UUIDType:0.0.1\""}}]}}}},"eventInputParameters":{"DmaapConsumer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","groupId":"clamp-grp","enableAutoCommit":true,"autoCommitTime":1000,"sessionTimeout":30000,"consumerPollTime":100,"consumerTopicList":["ac_element_msg"],"keyDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","valueDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseEvent"}},"eventName":"AcElementEvent","eventNameFilter":"AcElementEvent"}},"eventOutputParameters":{"logOutputter":{"carrierTechnologyParameters":{"carrierTechnology":"FILE","parameters":{"fileName":"outputevents.log"}},"eventProtocolParameters":{"eventProtocol":"JSON"}},"DmaapReplyProducer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","acks":"all","retries":0,"batchSize":16384,"lingerTime":1,"bufferMemory":33554432,"producerTopic":"policy_update_msg","keySerializer":"org.apache.kafka.common.serialization.StringSerializer","valueSerializer":"org.apache.kafka.common.serialization.StringSerializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseStatusEvent"}},"eventNameFilter":"LogEvent|DmaapResponseStatusEvent"}}},"name":"onap.policies.native.apex.ac.element","version":"1.0.0","metadata":{"policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0"}}}]},"name":"NULL","version":"0.0.0"},"properties":{"policy_type_id":{"name":"onap.policies.native.Apex","version":"1.0.0"},"policy_id":{"get_input":"acm_element_policy"}}}]}],"startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_DEPLOY","messageId":"46590b55-0c49-46ae-b243-90cfb0a03d4c","timestamp":"2024-02-16T17:03:53.679413689Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.737+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.976+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.login.retry.backoff.ms = 100 17:04:40 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-clamp-ac-pf-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DELETED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deleted"}},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Deleted","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 kafka | [2024-02-16 17:02:33,440] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 17:04:40 policy-clamp-runtime-acm | 
[2024-02-16T17:03:53.722+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-sim-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-clamp-ac-http-ppnt | {"messageType":"PARTICIPANT_PRIME","messageId":"06743202-529d-44dd-aee6-94cbebea181c","timestamp":"2024-02-16T17:04:26.969103751Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-pap | sasl.mechanism = GSSAPI 17:04:40 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.723+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,443] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 17:04:40 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 17:04:40 policy-clamp-runtime-acm | {"participantUpdatesList":[{"participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","acElementList":[{"id":"709c62b3-8918-41b9-a747-d21eb79c6c20","definition":{"name":"onap.policy.clamp.ac.element.Policy_AutomationCompositionElement","version":"1.2.3"},"orderedState":"DEPLOY","toscaServiceTemplateFragment":{"data_types":{"onap.datatypes.ToscaConceptIdentifier":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","required":true},"version":{"name":"version","type":"string","type_version":"0.0.0","required":true}},"name":"onap.datatypes.ToscaConceptIdentifier","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EngineService":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the engine name","default":"ApexEngineService","required":false},"version":{"name":"version","type":"string","type_version":"0.0.0","description":"Specifies the engine version in double dotted format","default":"1.0.0","required":false},"id":{"name":"id","type":"integer","type_version":"0.0.0","description":"Specifies the engine id","required":true},"instance_count":{"name":"instance_count","type":"integer","type_version":"0.0.0","description":"Specifies the number of engine threads that should be run","required":true},"deployment_port":{"name":"deployment_port","type":"integer","type_version":"0.0.0","description":"Specifies the port to connect to for engine administration","default":1.0,"required":false},"policy_model_file_name":{"name":"policy_model_file_name","type":"string","type_version":"0.0.0","description":"The name of the file from which to read the APEX policy model","required":false},"policy_type_impl":{"name":"policy_type_impl","type":"string","type_version":"0.0.0","description":"The policy type implementation from which to read the APEX policy model","required":false},"periodic_event_period":{"name":"periodic_event_period","type":"string","type_version":"0.0.0","description":"The time interval in milliseconds for the periodic scanning event, 0 means don't 
scan","required":false},"engine":{"name":"engine","type":"onap.datatypes.native.apex.engineservice.Engine","type_version":"0.0.0","description":"The parameters for all engines in the APEX engine service","required":true}},"name":"onap.datatypes.native.apex.EngineService","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventHandler":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"Specifies the event handler name, if not specified this is set to the key name","required":false},"carrier_technology":{"name":"carrier_technology","type":"onap.datatypes.native.apex.CarrierTechnology","type_version":"0.0.0","description":"Specifies the carrier technology of the event handler (such as REST/Web Socket/Kafka)","required":true},"event_protocol":{"name":"event_protocol","type":"onap.datatypes.native.apex.EventProtocol","type_version":"0.0.0","description":"Specifies the event protocol of events for the event handler (such as Yaml/JSON/XML/POJO)","required":true},"event_name":{"name":"event_name","type":"string","type_version":"0.0.0","description":"Specifies the event name for events on this event handler, if not specified, the event name is read from or written to the event being received or sent","required":false},"event_name_filter":{"name":"event_name_filter","type":"string","type_version":"0.0.0","description":"Specifies a filter as a regular expression, events that do not match the filter are dropped, the default is to let all events through","required":false},"synchronous_mode":{"name":"synchronous_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is syncronous (receive event and send response)","default":false,"required":false},"synchronous_peer":{"name":"synchronous_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in synchronous mode, this parameter is mandatory if the event handler is in synchronous mode","required":false},"synchronous_timeout":{"name":"synchronous_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for responses to be issued by APEX torequests, this parameter is mandatory if the event handler is in synchronous mode","required":false},"requestor_mode":{"name":"requestor_mode","type":"boolean","type_version":"0.0.0","description":"Specifies the event handler is in requestor mode (send event and wait for response mode)","default":false,"required":false},"requestor_peer":{"name":"requestor_peer","type":"string","type_version":"0.0.0","description":"The peer event handler (output for input or input for output) of this event handler in requestor mode, this parameter is mandatory if the event handler is in requestor mode","required":false},"requestor_timeout":{"name":"requestor_timeout","type":"integer","type_version":"0.0.0","description":"The timeout in milliseconds for wait for responses to requests, this parameter is mandatory if the event handler is in requestor mode","required":false}},"name":"onap.datatypes.native.apex.EventHandler","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.CarrierTechnology":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the carrier technology (such as REST, Kafka, 
WebSocket)","required":true},"plugin_parameter_class_name":{"name":"plugin_parameter_class_name","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of event input or output for this carrier technology, defaults to the supplied input or output class","required":false}},"name":"onap.datatypes.native.apex.CarrierTechnology","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.EventProtocol":{"properties":{"label":{"name":"label","type":"string","type_version":"0.0.0","description":"The label (name) of the event protocol (such as Yaml, JSON, XML, or POJO)","required":true},"event_protocol_plugin_class":{"name":"event_protocol_plugin_class","type":"string","type_version":"0.0.0","description":"The class name of the class that overrides default handling of the event protocol for this carrier technology, defaults to the supplied event protocol class","required":false}},"name":"onap.datatypes.native.apex.EventProtocol","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Environment":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the environment variable","required":true},"value":{"name":"value","type":"string","type_version":"0.0.0","description":"The value of the environment variable","required":true}},"name":"onap.datatypes.native.apex.Environment","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.Engine":{"properties":{"context":{"name":"context","type":"onap.datatypes.native.apex.engineservice.engine.Context","type_version":"0.0.0","description":"The properties for handling context in APEX engines, defaults to using Java maps for context","required":false},"executors":{"name":"executors","type":"map","type_version":"0.0.0","description":"The plugins for policy executors used in engines such as javascript, MVEL, Jython","required":true,"entry_schema":{"type":"string","type_version":"0.0.0","description":"The plugin class path for this policy executor"}}},"name":"onap.datatypes.native.apex.engineservice.Engine","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.engineservice.engine.Context":{"properties":{"distributor":{"name":"distributor","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for distributing context between APEX PDPs at runtime","required":false},"schemas":{"name":"schemas","type":"map","type_version":"0.0.0","description":"The plugins for context schemas available in APEX PDPs such as Java and Avro","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0"}},"locking":{"name":"locking","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for locking context in and between APEX PDPs at runtime","required":false},"persistence":{"name":"persistence","type":"onap.datatypes.native.apex.Plugin","type_version":"0.0.0","description":"The plugin to be used for persisting context for APEX PDPs at runtime","required":false}},"name":"onap.datatypes.native.apex.engineservice.engine.Context","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"onap.datatypes.native.apex.Plugin":{"properties":{"name":{"name":"name","type":"string","type_version":"0.0.0","description":"The name of the executor such as Javascript, 
Jython or MVEL","required":true},"plugin_class_name":{"name":"plugin_class_name","type":"string","type_version":"0.0.0","description":"The class path of the plugin class for this executor","required":false}},"name":"onap.datatypes.native.apex.Plugin","version":"0.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest":{"properties":{"restRequestId":{"name":"restRequestId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a REST request to be sent to a REST endpoint","required":true},"httpMethod":{"name":"httpMethod","type":"string","type_version":"0.0.0","description":"The REST method to use","required":true,"constraints":[{"valid_values":["POST","PUT","GET","DELETE"]}]},"path":{"name":"path","type":"string","type_version":"0.0.0","description":"The path of the REST request relative to the base URL","required":true},"body":{"name":"body","type":"string","type_version":"0.0.0","description":"The body of the REST request for PUT and POST requests","required":false},"expectedResponse":{"name":"expectedResponse","type":"integer","type_version":"0.0.0","description":"THe expected HTTP status code for the REST request","required":true,"constraints":[]}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}},"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity":{"properties":{"configurationEntityId":{"name":"configurationEntityId","type":"onap.datatypes.ToscaConceptIdentifier","type_version":"0.0.0","description":"The name and version of a Configuration Entity to be handled by the HTTP Automation Composition Element","required":true},"restSequence":{"name":"restSequence","type":"list","type_version":"0.0.0","description":"A sequence of REST commands to send to the REST endpoint","required":false,"entry_schema":{"type":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.RestRequest","type_version":"1.0.0"}}},"name":"org.onap.datatypes.policy.clamp.acm.httpAutomationCompositionElement.ConfigurationEntity","version":"1.0.0","derived_from":"tosca.datatypes.Root","metadata":{}}},"policy_types":{"onap.policies.Native":{"name":"onap.policies.Native","version":"1.0.0","derived_from":"tosca.policies.Root","metadata":{},"description":"a base policy type for all native PDP policies"},"onap.policies.native.Apex":{"properties":{"engine_service":{"name":"engine_service","type":"onap.datatypes.native.apex.EngineService","type_version":"0.0.0","description":"APEX Engine Service Parameters","required":false},"inputs":{"name":"inputs","type":"map","type_version":"0.0.0","description":"Inputs for handling events coming into the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"outputs":{"name":"outputs","type":"map","type_version":"0.0.0","description":"Outputs for handling events going out of the APEX engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.EventHandler","type_version":"0.0.0"}},"environment":{"name":"environment","type":"list","type_version":"0.0.0","description":"Envioronmental parameters for the APEX 
engine","required":false,"entry_schema":{"type":"onap.datatypes.native.apex.Environment","type_version":"0.0.0"}}},"name":"onap.policies.native.Apex","version":"1.0.0","derived_from":"onap.policies.Native","metadata":{},"description":"a policy type for native apex policies"}},"topology_template":{"policies":[{"onap.policies.native.apex.ac.element":{"type":"onap.policies.native.Apex","type_version":"1.0.0","properties":{"engineServiceParameters":{"name":"MyApexEngine","version":"0.0.1","id":45,"instanceCount":2,"deploymentPort":12561,"engineParameters":{"executorParameters":{"JAVASCRIPT":{"parameterClassName":"org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"}},"contextParameters":{"parameterClassName":"org.onap.policy.apex.context.parameters.ContextParameters","schemaParameters":{"Json":{"parameterClassName":"org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"}}}},"policy_type_impl":{"policies":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"policyMap":{"entry":[{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"policyKey":{"name":"ReceiveEventPolicy","version":"0.0.1"},"template":"Freestyle","state":{"entry":[{"key":"DecideForwardingState","value":{"stateKey":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DecideForwardingState"},"trigger":{"name":"AcElementEvent","version":"0.0.1"},"stateOutputs":{"entry":[{"key":"CreateForwardPayload","value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"},"outgoingEvent":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"outgoingEventReference":[{"name":"DmaapResponseStatusEvent","version":"0.0.1"}],"nextState":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}}}]},"contextAlbumReference":[],"taskSelectionLogic":{"key":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"},"logicFlavour":"UNDEFINED","logic":""},"stateFinalizerLogicMap":{"entry":[]},"defaultTask":{"name":"ForwardPayloadTask","version":"0.0.1"},"taskReferences":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"ReceiveEventPolicy"},"outputType":"DIRECT","output":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"}}}]}}}]},"firstState":"DecideForwardingState"}}]}},"tasks":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"taskMap":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"inputEvent":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"},"outputEvents":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"D
maapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]},"taskParameters":{"entry":[]},"contextAlbumReference":[{"name":"ACElementAlbum","version":"0.0.1"}],"taskLogic":{"key":{"parentKeyName":"ForwardPayloadTask","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"TaskLogic"},"logicFlavour":"JAVASCRIPT","logic":"/*\n * ============LICENSE_START=======================================================\n * Copyright (C) 2022 Nordix. All rights reserved.\n * ================================================================================\n * Licensed under the Apache License, Version 2.0 (the 'License');\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an 'AS IS' BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n * SPDX-License-Identifier: Apache-2.0\n * ============LICENSE_END=========================================================\n */\n\nexecutor.logger.info(executor.subject.id);\nexecutor.logger.info(executor.inFields);\n\nvar msgResponse = executor.inFields.get('DmaapResponseEvent');\nexecutor.logger.info('Task in progress with mesages: ' + msgResponse);\n\nvar elementId = msgResponse.get('elementId').get('name');\n\nif (msgResponse.get('messageType') == 'STATUS' &&\n (elementId == 'onap.policy.clamp.ac.startertobridge'\n || elementId == 'onap.policy.clamp.ac.bridgetosink')) {\n\n var receiverId = '';\n if (elementId == 'onap.policy.clamp.ac.startertobridge') {\n receiverId = 'onap.policy.clamp.ac.bridge';\n } else {\n receiverId = 'onap.policy.clamp.ac.sink';\n }\n\n var elementIdResponse = new java.util.HashMap();\n elementIdResponse.put('name', receiverId);\n elementIdResponse.put('version', msgResponse.get('elementId').get('version'));\n\n var dmaapResponse = new java.util.HashMap();\n dmaapResponse.put('elementId', elementIdResponse);\n\n var message = msgResponse.get('message') + ' trace added from policy';\n dmaapResponse.put('message', message);\n dmaapResponse.put('messageType', 'STATUS');\n dmaapResponse.put('messageId', msgResponse.get('messageId'));\n dmaapResponse.put('timestamp', msgResponse.get('timestamp'));\n\n executor.logger.info('Sending forwarding Event to Ac element: ' + dmaapResponse);\n\n executor.outFields.put('DmaapResponseStatusEvent', 
dmaapResponse);\n}\n\ntrue;"}}}]}},"events":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"eventMap":{"entry":[{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"file","parameter":{"entry":[{"key":"final_status","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"final_status"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}},{"key":"message","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"message"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]}},"albums":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"albums":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"scope":"policy","isWritable":true,"itemSchema":{"name":"ACEventType","version":"0.0.1"}}}]}},"schemas":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"schemas":{"entry":[{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"schemaFlavour":"Json","schemaDefinition":"{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"elementId\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"version\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"name\",\n \"version\"\n ]\n },\n \"message\": {\n \"type\": \"string\"\n },\n \"messageType\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"elementId\",\n \"message\",\n \"messageType\"\n 
]\n}"}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.Integer"}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.String"}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.util.UUID"}}]}},"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"keyInformation":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"keyInfoMap":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"UUID":"7cddfab8-6d3f-3f7f-8ac3-e2eb5979c900","description":"Generated description for concept referred to by key \"ACElementAlbum:0.0.1\""}},{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"UUID":"dab78794-b666-3929-a75b-70d634b04fe5","description":"Generated description for concept referred to by key \"ACEventType:0.0.1\""}},{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"UUID":"da478611-7d77-3c46-b4be-be968769ba4e","description":"Generated description for concept referred to by key \"APEXacElementPolicy:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"UUID":"fa8dc15e-8c8d-3de3-a0f8-585b76511175","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Albums:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"UUID":"8508cd65-8dd2-342d-a5c6-1570810dbe2b","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Events:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"UUID":"09e6927d-c5ac-3779-919f-9333994eed22","description":"Generated description for concept referred to by key \"APEXacElementPolicy_KeyInfo:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"UUID":"cade3c9a-1600-3642-a6f4-315612187f46","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Policies:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"UUID":"5bb4a8e9-35fa-37db-9a49-48ef036a7ba9","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Schemas:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"UUID":"2527eeec-0d1f-3094-ad3f-212622b12836","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Tasks:0.0.1\""}},{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"UUID":"32c013e2-2740-3986-a626-cbdf665b63e9","description":"Generated description for concept referred to by key 
\"AcElementEvent:0.0.1\""}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"UUID":"2715cb6c-2778-3461-8b69-871e79f95935","description":"Generated description for concept referred to by key \"DmaapResponseStatusEvent:0.0.1\""}},{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"UUID":"51defa03-1ecf-3314-bf34-2a652bce57fa","description":"Generated description for concept referred to by key \"ForwardPayloadTask:0.0.1\""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"UUID":"c540f048-96af-35e3-a36e-e9c29377cba7","description":"Generated description for concept referred to by key \"LogEvent:0.0.1\""}},{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"UUID":"568b7345-9de1-36d3-b6a3-9b857e6809a1","description":"Generated description for concept referred to by key \"ReceiveEventPolicy:0.0.1\""}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"UUID":"153791fd-ae0a-36a7-88a5-309a7936415d","description":"Generated description for concept referred to by key \"SimpleIntType:0.0.1\""}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"UUID":"8a4957cf-9493-3a76-8c22-a208e23259af","description":"Generated description for concept referred to by key \"SimpleStringType:0.0.1\""}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"UUID":"6a8cc68e-dfc8-3403-9c6d-071c886b319c","description":"Generated description for concept referred to by key 
\"UUIDType:0.0.1\""}}]}}}},"eventInputParameters":{"DmaapConsumer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","groupId":"clamp-grp","enableAutoCommit":true,"autoCommitTime":1000,"sessionTimeout":30000,"consumerPollTime":100,"consumerTopicList":["ac_element_msg"],"keyDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","valueDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseEvent"}},"eventName":"AcElementEvent","eventNameFilter":"AcElementEvent"}},"eventOutputParameters":{"logOutputter":{"carrierTechnologyParameters":{"carrierTechnology":"FILE","parameters":{"fileName":"outputevents.log"}},"eventProtocolParameters":{"eventProtocol":"JSON"}},"DmaapReplyProducer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","acks":"all","retries":0,"batchSize":16384,"lingerTime":1,"bufferMemory":33554432,"producerTopic":"policy_update_msg","keySerializer":"org.apache.kafka.common.serialization.StringSerializer","valueSerializer":"org.apache.kafka.common.serialization.StringSerializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseStatusEvent"}},"eventNameFilter":"LogEvent|DmaapResponseStatusEvent"}}},"name":"onap.policies.native.apex.ac.element","version":"1.0.0","metadata":{"policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0"}}}]},"name":"NULL","version":"0.0.0"},"properties":{"policy_type_id":{"name":"onap.policies.native.Apex","version":"1.0.0"},"policy_id":{"get_input":"acm_element_policy"}}}]}],"startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_DEPLOY","messageId":"46590b55-0c49-46ae-b243-90cfb0a03d4c","timestamp":"2024-02-16T17:03:53.679413689Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.737+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.990+00:00|INFO|network|pool-4-thread-2] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-pf-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DELETED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deleted"}},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Deleted","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:33,443] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 17:04:40 policy-clamp-runtime-acm | 
[2024-02-16T17:03:53.726+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_DEPLOY 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.743+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-pap | sasl.oauthbearer.expected.audience = null 17:04:40 policy-apex-pdp | sasl.mechanism = GSSAPI 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.723+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:40 kafka | [2024-02-16 17:02:33,444] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:57.708+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-sim-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.993+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.732+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:33,444] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 17:04:40 policy-clamp-runtime-acm | 
{"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[{"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","deployState":"UNDEPLOYED","lockState":"NONE","elements":[{"automationCompositionElementId":"709c62b3-8918-41b9-a747-d21eb79c6c20","deployState":"DEPLOYING","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{}}]}],"participantSupportedElementType":[{"id":"0b8ba591-6c02-4faf-8911-f6ce37e044af","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"9248b3f5-302a-4c92-b6c4-812b252c6967","timestamp":"2024-02-16T17:03:57.674696111Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.743+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-clamp-ac-http-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 17:04:40 policy-clamp-ac-pf-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:33,460] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:03:57.755+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.743+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:26.993+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.733+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:33,461] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 17:04:40 policy-clamp-runtime-acm | 
{"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DEPLOYED","lockState":"LOCKED","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deployed"}},"responseTo":"46590b55-0c49-46ae-b243-90cfb0a03d4c","result":true,"stateChangeResult":"NO_ERROR","message":"Deployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-clamp-ac-sim-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:27.002+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.738+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | > upgrade 0770-toscarequirement.sql 17:04:40 kafka | [2024-02-16 17:02:33,461] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:01.023+00:00|INFO|SupervisionAspect|scheduling-1] Add scheduled scanning 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.744+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-clamp-ac-http-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 policy-clamp-ac-pf-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:33,462] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:21.022+00:00|INFO|SupervisionAspect|scheduling-1] Add scheduled scanning 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.986+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:27.002+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 
policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.740+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 17:04:40 kafka | [2024-02-16 17:02:33,464] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:21.322+00:00|INFO|network|pool-4-thread-1] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-sim-ppnt | {"messageType":"PARTICIPANT_PRIME","messageId":"06743202-529d-44dd-aee6-94cbebea181c","timestamp":"2024-02-16T17:04:26.969103751Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:27.005+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.740+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:33,467] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 17:04:40 policy-clamp-runtime-acm | {"deployOrderedState":"UNDEPLOY","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","timestamp":"2024-02-16T17:04:21.322183159Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.991+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-http-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-pf-ppnt | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 kafka | [2024-02-16 17:02:33,475] INFO [ReplicaStateMachine controllerId=1] Initializing replica state 
(kafka.controller.ZkReplicaStateMachine) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:21.334+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-sim-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 policy-clamp-ac-http-ppnt | [2024-02-16T17:04:27.005+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-pap | security.protocol = PLAINTEXT 17:04:40 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.740+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATECHANGE_ACK 17:04:40 kafka | [2024-02-16 17:02:33,476] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 17:04:40 policy-clamp-runtime-acm | {"deployOrderedState":"UNDEPLOY","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","timestamp":"2024-02-16T17:04:21.322183159Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.992+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-pap | security.providers = null 17:04:40 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 17:04:40 policy-db-migrator | > upgrade 0780-toscarequirements.sql 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.979+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,479] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:21.334+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATE_CHANGE 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.997+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | send.buffer.bytes = 131072 17:04:40 policy-apex-pdp | security.protocol = PLAINTEXT 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-pf-ppnt | {"messageType":"PARTICIPANT_PRIME","messageId":"06743202-529d-44dd-aee6-94cbebea181c","timestamp":"2024-02-16T17:04:26.969103751Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 kafka | [2024-02-16 17:02:33,479] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 17:04:40 policy-clamp-ac-sim-ppnt | 
{"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:22.415+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | session.timeout.ms = 45000 17:04:40 policy-apex-pdp | security.providers = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.981+00:00|INFO|network|pool-4-thread-5] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,480] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:26.997+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-clamp-runtime-acm | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"UNDEPLOYED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Undeployed"}},"responseTo":"5fd05feb-efa8-41d0-a56f-ea639fdaf1aa","result":true,"stateChangeResult":"NO_ERROR","message":"Undeployed","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-apex-pdp | send.buffer.bytes = 131072 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-pf-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 kafka | [2024-02-16 17:02:33,480] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:27.004+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:26.693+00:00|INFO|network|pool-4-thread-1] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-apex-pdp | session.timeout.ms = 30000 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.991+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,480] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 17:04:40 policy-clamp-ac-sim-ppnt | 
{"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-clamp-runtime-acm | {"deployOrderedState":"DELETE","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","timestamp":"2024-02-16T17:04:26.693700299Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-pap | ssl.cipher.suites = null 17:04:40 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 17:04:40 policy-db-migrator | 17:04:40 policy-clamp-ac-pf-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 kafka | [2024-02-16 17:02:33,483] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 17:04:40 policy-clamp-ac-sim-ppnt | [2024-02-16T17:04:27.005+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:26.717+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 17:04:40 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.991+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 kafka | [2024-02-16 17:02:33,483] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 17:04:40 policy-clamp-runtime-acm | {"deployOrderedState":"DELETE","lockOrderedState":"NONE","startPhase":0,"firstStartPhase":true,"messageType":"AUTOMATION_COMPOSITION_STATE_CHANGE","messageId":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","timestamp":"2024-02-16T17:04:26.693700299Z","automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:26.717+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type AUTOMATION_COMPOSITION_STATE_CHANGE 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.996+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-clamp-ac-pf-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 kafka | [2024-02-16 17:02:33,491] INFO [Controller id=1] 
Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 17:04:40 policy-apex-pdp | ssl.cipher.suites = null 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:26.730+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:26.996+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:27.005+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 kafka | [2024-02-16 17:02:33,492] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 17:04:40 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 policy-clamp-runtime-acm | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{"709c62b3-8918-41b9-a747-d21eb79c6c20":{"deployState":"DELETED","lockState":"NONE","operationalState":"ENABLED","useState":"IDLE","outProperties":{},"result":true,"message":"Deleted"}},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Deleted","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-clamp-ac-pf-ppnt | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-clamp-ac-pf-ppnt | [2024-02-16T17:04:27.005+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK 17:04:40 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:26.764+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-pap | ssl.endpoint.identification.algorithm = https 17:04:40 policy-pap | ssl.engine.factory.class = null 17:04:40 policy-clamp-runtime-acm | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:33,492] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. 
(org.apache.kafka.clients.NetworkClient) 17:04:40 policy-apex-pdp | ssl.engine.factory.class = null 17:04:40 policy-pap | ssl.key.password = null 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:26.775+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:33,492] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 17:04:40 policy-apex-pdp | ssl.key.password = null 17:04:40 policy-pap | ssl.keymanager.algorithm = SunX509 17:04:40 policy-clamp-runtime-acm | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} 17:04:40 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 17:04:40 policy-apex-pdp | ssl.keystore.certificate.chain = null 17:04:40 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 17:04:40 policy-pap | ssl.keystore.certificate.chain = null 17:04:40 kafka | [2024-02-16 17:02:33,493] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:26.782+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-apex-pdp | ssl.keystore.key = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.keystore.key = null 17:04:40 kafka | [2024-02-16 17:02:33,494] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
(kafka.network.SocketServer) 17:04:40 policy-clamp-runtime-acm | {"automationCompositionId":"5f8b554f-0760-497d-900b-f38674e2d074","automationCompositionResultMap":{},"responseTo":"6ec7c980-19d1-4f12-89b9-892e3bbc5013","result":true,"stateChangeResult":"NO_ERROR","message":"Already deleted or never used","messageType":"AUTOMATION_COMPOSITION_STATECHANGE_ACK","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} 17:04:40 policy-apex-pdp | ssl.keystore.location = null 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 17:04:40 policy-pap | ssl.keystore.location = null 17:04:40 kafka | [2024-02-16 17:02:33,494] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:26.969+00:00|INFO|network|pool-5-thread-1] [OUT|KAFKA|policy-acruntime-participant] 17:04:40 policy-apex-pdp | ssl.keystore.password = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.keystore.password = null 17:04:40 policy-pap | ssl.keystore.type = JKS 17:04:40 policy-clamp-runtime-acm | {"messageType":"PARTICIPANT_PRIME","messageId":"06743202-529d-44dd-aee6-94cbebea181c","timestamp":"2024-02-16T17:04:26.969103751Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-apex-pdp | ssl.keystore.type = JKS 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.protocol = TLSv1.3 17:04:40 policy-pap | ssl.provider = null 17:04:40 policy-apex-pdp | ssl.protocol = TLSv1.3 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.secure.random.implementation = null 17:04:40 policy-pap | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:27.006+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-apex-pdp | ssl.provider = null 17:04:40 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 17:04:40 policy-pap | ssl.truststore.certificates = null 17:04:40 policy-pap | ssl.truststore.location = null 17:04:40 policy-clamp-runtime-acm | {"messageType":"PARTICIPANT_PRIME","messageId":"06743202-529d-44dd-aee6-94cbebea181c","timestamp":"2024-02-16T17:04:26.969103751Z","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f"} 17:04:40 policy-apex-pdp | ssl.secure.random.implementation = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.truststore.password = null 17:04:40 policy-pap | ssl.truststore.type = JKS 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:27.007+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME 
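Editor's note: the PARTICIPANT_PRIME_ACK entries exchanged on policy-acruntime-participant above all share the same envelope (messageType, responseTo, compositionId, participantId, state), and each participant's dispatcher routes on messageType, which is why the other participants log "discarding event of type PARTICIPANT_PRIME_ACK". Below is a small hedged sketch that reads one of these payloads, copied verbatim from the log, with Jackson; it assumes jackson-databind is on the classpath and the class name PrimeAckPeek is illustrative only.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PrimeAckPeek {
    public static void main(String[] args) throws Exception {
        // One PARTICIPANT_PRIME_ACK payload taken from the log above.
        String ack = """
            {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c",
             "result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed",
             "messageType":"PARTICIPANT_PRIME_ACK",
             "compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f",
             "participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"}
            """;

        JsonNode node = new ObjectMapper().readTree(ack);
        // A listener would first inspect messageType before deciding to handle or discard the event.
        System.out.println(node.get("messageType").asText()
                + " from " + node.get("participantId").asText()
                + " -> " + node.get("compositionState").asText());
    }
}
```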
17:04:40 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:40 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-pap | 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:27.012+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-apex-pdp | ssl.truststore.certificates = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:25.759+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-pap | [2024-02-16T17:03:25.759+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-clamp-runtime-acm | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} 17:04:40 policy-apex-pdp | ssl.truststore.location = null 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:25.759+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103005758 17:04:40 policy-pap | [2024-02-16T17:03:25.761+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:27.064+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-apex-pdp | ssl.truststore.password = null 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:25.761+00:00|INFO|ServiceManager|main] Policy PAP starting topics 17:04:40 policy-pap | [2024-02-16T17:03:25.762+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=11f41433-eb08-4e90-84ba-1b4e7e546b71, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:04:40 policy-clamp-runtime-acm | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} 17:04:40 policy-apex-pdp | ssl.truststore.type = JKS 17:04:40 policy-db-migrator | > upgrade 0820-toscatrigger.sql 17:04:40 policy-pap | [2024-02-16T17:03:25.762+00:00|INFO|SingleThreadedBusTopicSource|main] 
SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=084a2e58-01c1-4612-9881-9e51d9ffa3ed, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:04:40 policy-pap | [2024-02-16T17:03:25.762+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=92462fcf-1a1d-4540-8555-9354aa93d05c, alive=false, publisher=null]]: starting 17:04:40 policy-clamp-runtime-acm | [2024-02-16T17:04:27.105+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] 17:04:40 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:25.782+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:04:40 policy-pap | acks = -1 17:04:40 policy-clamp-runtime-acm | {"compositionState":"COMMISSIONED","responseTo":"06743202-529d-44dd-aee6-94cbebea181c","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"715407e5-17b4-40bf-9633-c1ca5735224f","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} 17:04:40 policy-apex-pdp | 17:04:40 policy-pap | auto.include.jmx.reporter = true 17:04:40 policy-pap | batch.size = 16384 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.550+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:04:40 policy-pap | bootstrap.servers = [kafka:9092] 17:04:40 policy-pap | buffer.memory = 33554432 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[{"name":"onap.policies.native.apex.ac.element","version":"1.0.0"}],"response":{"responseTo":"01a3277e-772a-4c95-b79a-4ecc96f9bfb9","responseStatus":"SUCCESS","responseMessage":"Apex engine started. 
Deployed policies are: onap.policies.native.apex.ac.element:1.0.0 "},"messageName":"PDP_STATUS","requestId":"8c6bbbbd-f3ca-4794-ae5b-08b628aefb3f","timestampMs":1708103038549,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-pap | client.dns.lookup = use_all_dns_ips 17:04:40 policy-pap | client.id = producer-1 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.566+00:00|INFO|AppInfoParser|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] Kafka version: 3.6.1 17:04:40 policy-pap | compression.type = none 17:04:40 policy-pap | connections.max.idle.ms = 540000 17:04:40 policy-db-migrator | 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.566+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-pap | delivery.timeout.ms = 120000 17:04:40 kafka | [2024-02-16 17:02:33,495] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.5:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 17:04:40 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | enable.idempotence = true 17:04:40 kafka | [2024-02-16 17:02:33,497] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[{"name":"onap.policies.native.apex.ac.element","version":"1.0.0"}],"response":{"responseTo":"01a3277e-772a-4c95-b79a-4ecc96f9bfb9","responseStatus":"SUCCESS","responseMessage":"Apex engine started. Deployed policies are: onap.policies.native.apex.ac.element:1.0.0 "},"messageName":"PDP_STATUS","requestId":"8c6bbbbd-f3ca-4794-ae5b-08b628aefb3f","timestampMs":1708103038549,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 17:04:40 policy-pap | interceptor.classes = [] 17:04:40 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
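Editor's note: on the output side, the deployed policy's DmaapReplyProducer (see the eventOutputParameters block earlier in the log) publishes JSON events to the policy_update_msg topic with acks=all, retries=0, batchSize=16384, lingerTime=1 and bufferMemory=33554432. The sketch below is a rough equivalent using the plain Kafka producer API; the mapping of the APEX parameter names onto ProducerConfig keys is an assumption, and PolicyUpdateMsgProducer is a made-up class name.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PolicyUpdateMsgProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values copied from the DmaapReplyProducer carrierTechnologyParameters above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);        // assumed mapping of lingerTime
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producerTopic in the policy is "policy_update_msg"; the payload here is a placeholder.
            producer.send(new ProducerRecord<>("policy_update_msg", "{\"messageType\":\"STATUS\"}"));
            producer.flush();
        }
    }
}
```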
17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.566+00:00|INFO|AppInfoParser|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-db-migrator | 17:04:40 policy-pap | linger.ms = 0 17:04:40 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.567+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:40 policy-db-migrator | 17:04:40 policy-pap | max.block.ms = 60000 17:04:40 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.567+00:00|INFO|AppInfoParser|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] Kafka startTimeMs: 1708103038566 17:04:40 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 17:04:40 policy-pap | max.in.flight.requests.per.connection = 5 17:04:40 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.567+00:00|INFO|KafkaConsumer|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Subscribed to topic(s): ac_element_msg 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | max.request.size = 1048576 17:04:40 kafka | [2024-02-16 17:02:33,503] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.693+00:00|WARN|NetworkClient|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Error while fetching metadata with correlation id 2 : {ac_element_msg=LEADER_NOT_AVAILABLE} 17:04:40 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 17:04:40 policy-pap | metadata.max.age.ms = 300000 17:04:40 kafka | [2024-02-16 17:02:33,504] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.693+00:00|INFO|Metadata|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | metadata.max.idle.ms = 300000 17:04:40 kafka | [2024-02-16 17:02:33,511] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.694+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | metric.reporters = [] 17:04:40 kafka | [2024-02-16 17:02:33,526] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.703+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] (Re-)joining group 17:04:40 policy-db-migrator | 17:04:40 policy-pap | metrics.num.samples = 2 17:04:40 kafka | [2024-02-16 17:02:33,526] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.706+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Request joining group due to: need to re-join with the given member-id: consumer-clamp-grp-3-c95ab340-d38a-4d04-a341-ef58067afa4c 17:04:40 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 17:04:40 policy-pap | metrics.recording.level = INFO 17:04:40 kafka | [2024-02-16 17:02:33,527] INFO Kafka startTimeMs: 1708102953519 (org.apache.kafka.common.utils.AppInfoParser) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.706+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | metrics.sample.window.ms = 30000 17:04:40 kafka | [2024-02-16 17:02:33,528] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 17:04:40 policy-apex-pdp | [2024-02-16T17:03:58.706+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] (Re-)joining group 17:04:40 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 17:04:40 policy-pap | partitioner.adaptive.partitioning.enable = true 17:04:40 kafka | [2024-02-16 17:02:33,587] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:01.711+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Successfully joined group with generation Generation{generationId=1, memberId='consumer-clamp-grp-3-c95ab340-d38a-4d04-a341-ef58067afa4c', protocol='range'} 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | partitioner.availability.timeout.ms = 0 17:04:40 kafka | [2024-02-16 17:02:33,607] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:01.712+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Finished assignment for group at generation 1: {consumer-clamp-grp-3-c95ab340-d38a-4d04-a341-ef58067afa4c=Assignment(partitions=[ac_element_msg-0])} 17:04:40 policy-db-migrator | 17:04:40 policy-pap | partitioner.class = null 17:04:40 kafka | [2024-02-16 17:02:33,688] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:01.717+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Successfully synced group in generation Generation{generationId=1, memberId='consumer-clamp-grp-3-c95ab340-d38a-4d04-a341-ef58067afa4c', protocol='range'} 17:04:40 policy-db-migrator | 17:04:40 policy-pap | partitioner.ignore.keys = false 17:04:40 kafka | [2024-02-16 17:02:33,688] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:01.718+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Notifying assignor about the new Assignment(partitions=[ac_element_msg-0]) 17:04:40 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 17:04:40 policy-pap | receive.buffer.bytes = 32768 17:04:40 kafka | [2024-02-16 17:02:33,700] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with 
correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:01.718+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Adding newly assigned partitions: ac_element_msg-0 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | reconnect.backoff.max.ms = 1000 17:04:40 kafka | [2024-02-16 17:02:38,589] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:01.720+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Found no committed offset for partition ac_element_msg-0 17:04:40 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 17:04:40 policy-pap | reconnect.backoff.ms = 50 17:04:40 kafka | [2024-02-16 17:02:38,589] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:01.722+00:00|INFO|SubscriptionState|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Resetting offset for partition ac_element_msg-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | request.timeout.ms = 30000 17:04:40 kafka | [2024-02-16 17:02:51,182] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:21.828+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-db-migrator | 17:04:40 policy-pap | retries = 2147483647 17:04:40 kafka | [2024-02-16 17:02:51,200] INFO Creating topic policy-acruntime-participant with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 17:04:40 policy-apex-pdp | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.policies.native.apex.ac.element","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"2b39e7c2-14f0-4833-9db3-535a65414122","timestampMs":1708103061766,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | 17:04:40 policy-pap | retry.backoff.ms = 100 17:04:40 policy-pap | sasl.client.callback.handler.class = null 17:04:40 policy-pap | sasl.jaas.config = null 17:04:40 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 17:04:40 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 policy-apex-pdp | [2024-02-16T17:04:21.829+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:apex/tosca/policy/list 17:04:40 policy-db-migrator | CREATE INDEX 
FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 17:04:40 kafka | [2024-02-16 17:02:51,206] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 17:04:40 policy-pap | sasl.kerberos.service.name = null 17:04:40 policy-apex-pdp | [2024-02-16T17:04:21.939+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Revoke previously assigned partitions ac_element_msg-0 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,253] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 17:04:40 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 policy-apex-pdp | [2024-02-16T17:04:21.940+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Member consumer-clamp-grp-3-c95ab340-d38a-4d04-a341-ef58067afa4c sending LeaveGroup request to coordinator kafka:9092 (id: 2147483646 rack: null) due to the consumer is being closed 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,256] INFO [Controller id=1] New topics: [Set(policy-acruntime-participant)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-acruntime-participant,Some(CQ6PJtybRc6NkrAo8RGa4Q),Map(policy-acruntime-participant-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 17:04:40 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 policy-apex-pdp | [2024-02-16T17:04:21.941+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Resetting generation and member id due to: consumer pro-actively leaving the group 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,257] INFO [Controller id=1] New partition creation callback for policy-acruntime-participant-0 (kafka.controller.KafkaController) 
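The apex-pdp lines above trace a standard Kafka consumer-group join for group clamp-grp on topic ac_element_msg: the first join attempt is rejected with MemberIdRequiredException, the consumer re-joins with its assigned member id, the group syncs at generation 1, partition ac_element_msg-0 is assigned, and the offset is reset to 0 because nothing was committed. A minimal stand-alone consumer that would go through the same sequence (a sketch under those assumptions, not the ApexKafkaConsumer implementation itself):

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class ClampGroupConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // broker address from the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "clamp-grp");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");     // no committed offset -> start at 0

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("ac_element_msg"));                  // triggers the (Re-)joining group sequence
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
            }   // close() sends the LeaveGroup request seen later in the log
        }
    }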
17:04:40 policy-pap | sasl.login.callback.handler.class = null 17:04:40 policy-apex-pdp | [2024-02-16T17:04:21.941+00:00|INFO|ConsumerCoordinator|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] [Consumer clientId=consumer-clamp-grp-3, groupId=clamp-grp] Request joining group due to: consumer pro-actively leaving the group 17:04:40 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 17:04:40 kafka | [2024-02-16 17:02:51,259] INFO [Controller id=1 epoch=1] Changed partition policy-acruntime-participant-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:22.374+00:00|INFO|Metrics|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] Metrics scheduler closed 17:04:40 policy-pap | sasl.login.class = null 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,259] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:22.374+00:00|INFO|Metrics|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] Closing reporter org.apache.kafka.common.metrics.JmxReporter 17:04:40 policy-pap | sasl.login.connect.timeout.ms = null 17:04:40 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 17:04:40 kafka | [2024-02-16 17:02:51,263] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-acruntime-participant-0 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:22.374+00:00|INFO|Metrics|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] Metrics reporters closed 17:04:40 policy-pap | sasl.login.read.timeout.ms = null 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,264] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:22.380+00:00|INFO|AppInfoParser|Apex-org.onap.policy.apex.plugins.event.carrier.kafka.ApexKafkaConsumer:DmaapConsumer-3:0] App info kafka.consumer for consumer-clamp-grp-3 unregistered 17:04:40 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,334] INFO [Controller id=1 epoch=1] Changed partition policy-acruntime-participant-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-apex-pdp | [2024-02-16T17:04:22.697+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:04:40 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-acruntime-participant', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-acruntime-participant-0 (state.change.logger) 17:04:40 policy-pap | 
sasl.login.refresh.window.factor = 0.8 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"2b39e7c2-14f0-4833-9db3-535a65414122","responseStatus":"SUCCESS","responseMessage":"Pdp update successful. No policies are running."},"messageName":"PDP_STATUS","requestId":"701a1629-f808-462c-9b57-3f980496b3a2","timestampMs":1708103062697,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 17:04:40 kafka | [2024-02-16 17:02:51,347] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 17:04:40 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:04:40 policy-apex-pdp | [2024-02-16T17:04:22.708+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,351] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 17:04:40 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:04:40 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"2b39e7c2-14f0-4833-9db3-535a65414122","responseStatus":"SUCCESS","responseMessage":"Pdp update successful. No policies are running."},"messageName":"PDP_STATUS","requestId":"701a1629-f808-462c-9b57-3f980496b3a2","timestampMs":1708103062697,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 17:04:40 kafka | [2024-02-16 17:02:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-acruntime-participant-0 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | sasl.login.retry.backoff.ms = 100 17:04:40 policy-apex-pdp | [2024-02-16T17:04:22.708+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,352] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 policy-pap | sasl.mechanism = GSSAPI 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,361] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 17:04:40 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,363] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-acruntime-participant', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-pap | sasl.oauthbearer.expected.audience = null 17:04:40 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 17:04:40 policy-pap | sasl.oauthbearer.expected.issuer = null 17:04:40 kafka 
| [2024-02-16 17:02:51,367] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(RctN_RRuS92Qhw_l4ItHJQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 kafka | [2024-02-16 17:02:51,367] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 17:04:40 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 
policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 17:04:40 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | security.protocol = PLAINTEXT 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 17:04:40 policy-pap | security.providers = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | send.buffer.bytes = 131072 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | socket.connection.setup.timeout.ms = 10000 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 17:04:40 policy-pap | ssl.cipher.suites = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 17:04:40 policy-pap | ssl.endpoint.identification.algorithm = https 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.engine.factory.class = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.key.password = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.keymanager.algorithm = SunX509 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 17:04:40 policy-pap | ssl.keystore.certificate.chain = null 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.keystore.key = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 17:04:40 policy-pap | ssl.keystore.location = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.keystore.password = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.keystore.type = JKS 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.protocol = TLSv1.3 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 17:04:40 policy-pap | ssl.provider = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.secure.random.implementation = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 
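The db-migrator entries above are plain DDL applied from numbered upgrade scripts (the 08xx/09xx scripts create the indexes, and the later scripts add the matching foreign-key constraints). Purely as an illustration of what one of those scripts does, a JDBC sketch; the MariaDB URL, schema name and credentials below are placeholders, not the values used by this CSIT environment:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigratorIndexSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; requires the MariaDB JDBC driver on the classpath.
            String url = "jdbc:mariadb://mariadb:3306/policyadmin";
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_password");
                 Statement stmt = conn.createStatement()) {
                // Same statement as upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql in the log
                stmt.executeUpdate("CREATE INDEX FK_ToscaTopologyTemplate_policyName "
                        + "ON toscatopologytemplate(policyName, policyVersion)");
            }
        }
    }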
17:04:40 policy-pap | ssl.trustmanager.algorithm = PKIX 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.truststore.certificates = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.truststore.location = null 17:04:40 policy-pap | ssl.truststore.password = null 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.truststore.type = JKS 17:04:40 policy-pap | transaction.timeout.ms = 60000 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 17:04:40 policy-pap | transactional.id = null 17:04:40 kafka | [2024-02-16 17:02:51,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 policy-pap | 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:25.794+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
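policy-pap logs the full ProducerConfig for producer-1 before creating it: acks = -1 (equivalent to "all"), enable.idempotence = true, String key/value serializers and bootstrap server kafka:9092, which is why the client then reports "Instantiated an idempotent producer". A minimal sketch of a producer built with those same logged settings (not the InlineKafkaTopicSink code itself):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;
    import java.util.Properties;

    public class PapLikeProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");                        // acks = -1 in the logged config
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Example payload only; the real traffic is the PDP_UPDATE/PDP_STATUS JSON shown in the log.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
                producer.flush();
            }
        }
    }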
17:04:40 policy-pap | [2024-02-16T17:03:25.811+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.811+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-pap | [2024-02-16T17:03:25.811+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103005811 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.812+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=92462fcf-1a1d-4540-8555-9354aa93d05c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:04:40 policy-pap | [2024-02-16T17:03:25.812+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=20aa803e-aaa7-49a0-8412-571e8be7fdc7, alive=false, publisher=null]]: starting 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.812+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:04:40 policy-pap | acks = -1 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | auto.include.jmx.reporter = true 17:04:40 policy-pap | batch.size = 16384 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | bootstrap.servers = [kafka:9092] 17:04:40 policy-pap | buffer.memory = 33554432 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | client.dns.lookup = use_all_dns_ips 17:04:40 policy-pap | client.id = producer-2 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | compression.type = none 17:04:40 policy-pap | connections.max.idle.ms = 540000 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | delivery.timeout.ms = 120000 17:04:40 policy-pap | enable.idempotence = true 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | interceptor.classes = [] 17:04:40 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 kafka | 
[2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | linger.ms = 0 17:04:40 policy-pap | max.block.ms = 60000 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | max.in.flight.requests.per.connection = 5 17:04:40 policy-pap | max.request.size = 1048576 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | metadata.max.age.ms = 300000 17:04:40 policy-pap | metadata.max.idle.ms = 300000 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | metric.reporters = [] 17:04:40 policy-pap | metrics.num.samples = 2 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-pap | metrics.recording.level = INFO 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 kafka | [2024-02-16 17:02:51,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,371] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 17:04:40 kafka | [2024-02-16 17:02:51,371] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 kafka | [2024-02-16 17:02:51,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 17:04:40 kafka | [2024-02-16 17:02:51,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 kafka | [2024-02-16 17:02:51,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 17:04:40 kafka | [2024-02-16 17:02:51,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 kafka | [2024-02-16 17:02:51,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to 
NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 17:04:40 kafka | [2024-02-16 17:02:51,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 kafka | [2024-02-16 17:02:51,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 17:04:40 kafka | [2024-02-16 17:02:51,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 kafka | [2024-02-16 17:02:51,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | metrics.sample.window.ms = 30000 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from 
NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | 17:04:40 policy-pap | partitioner.adaptive.partitioning.enable = true 17:04:40 policy-pap | partitioner.availability.timeout.ms = 0 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-pap | partitioner.class = null 17:04:40 policy-pap | partitioner.ignore.keys = false 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-pap | receive.buffer.bytes = 32768 17:04:40 policy-pap | reconnect.backoff.max.ms = 1000 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-pap | reconnect.backoff.ms = 50 17:04:40 policy-pap | request.timeout.ms = 30000 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-pap | retries = 2147483647 17:04:40 policy-pap | retry.backoff.ms = 100 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-pap | sasl.client.callback.handler.class = null 17:04:40 policy-pap | sasl.jaas.config = null 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:40 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.kerberos.service.name = null 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:40 kafka | 
[2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:40 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.login.callback.handler.class = null 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | sasl.login.class = null 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | sasl.login.connect.timeout.ms = null 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0100-pdp.sql 17:04:40 policy-pap | sasl.login.read.timeout.ms = null 17:04:40 kafka | [2024-02-16 17:02:51,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:04:40 kafka | [2024-02-16 17:02:51,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 17:04:40 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:04:40 kafka | [2024-02-16 17:02:51,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.login.refresh.window.factor = 0.8 17:04:40 kafka | [2024-02-16 17:02:51,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from 
NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 17:04:40 policy-pap | sasl.login.retry.backoff.ms = 100 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.mechanism = GSSAPI 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 17:04:40 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.oauthbearer.expected.audience = null 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | sasl.oauthbearer.expected.issuer = null 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 17:04:40 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:04:40 kafka | [2024-02-16 17:02:51,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 
policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:04:40 kafka | [2024-02-16 17:02:51,400] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 17:04:40 policy-pap | security.protocol = PLAINTEXT 17:04:40 kafka | [2024-02-16 17:02:51,420] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-acruntime-participant-0 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | security.providers = null 17:04:40 kafka | [2024-02-16 17:02:51,423] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-acruntime-participant-0) (kafka.server.ReplicaFetcherManager) 17:04:40 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 17:04:40 policy-pap | send.buffer.bytes = 131072 17:04:40 kafka | [2024-02-16 17:02:51,423] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:04:40 kafka | [2024-02-16 17:02:51,555] INFO [LogLoader partition=policy-acruntime-participant-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | socket.connection.setup.timeout.ms = 10000 17:04:40 kafka | [2024-02-16 17:02:51,581] INFO Created log for partition policy-acruntime-participant-0 in /var/lib/kafka/data/policy-acruntime-participant-0 with properties {} (kafka.log.LogManager) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | ssl.cipher.suites = null 17:04:40 kafka | [2024-02-16 17:02:51,584] INFO [Partition policy-acruntime-participant-0 broker=1] No checkpointed highwatermark is found for partition policy-acruntime-participant-0 (kafka.cluster.Partition) 17:04:40 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 17:04:40 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:40 kafka | [2024-02-16 17:02:51,587] INFO [Partition policy-acruntime-participant-0 broker=1] Log loaded for partition policy-acruntime-participant-0 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.endpoint.identification.algorithm = https 17:04:40 kafka | [2024-02-16 17:02:51,589] INFO [Broker id=1] Leader policy-acruntime-participant-0 with topic id Some(CQ6PJtybRc6NkrAo8RGa4Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 17:04:40 policy-pap | ssl.engine.factory.class = null 17:04:40 kafka | [2024-02-16 17:02:51,639] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-acruntime-participant-0 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | ssl.key.password = null 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,646] INFO [Broker id=1] Finished LeaderAndIsr request in 287ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) 17:04:40 policy-pap | ssl.keymanager.algorithm = SunX509 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,653] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=CQ6PJtybRc6NkrAo8RGa4Q, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 policy-pap | ssl.keystore.certificate.chain = null 17:04:40 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.keystore.key = null 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.keystore.location = null 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.keystore.password = null 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.keystore.type = JKS 17:04:40 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.protocol = TLSv1.3 17:04:40 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.provider = null 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.secure.random.implementation = null 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.trustmanager.algorithm = PKIX 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.truststore.certificates = null 17:04:40 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.truststore.location = null 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.truststore.password = null 17:04:40 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | ssl.truststore.type = JKS 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | transaction.timeout.ms = 60000 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | transactional.id = null 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,659] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:40 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.813+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
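The producer settings dumped by policy-pap above (StringSerializer values, retries = 2147483647, transaction.timeout.ms = 60000, followed by "Instantiated an idempotent producer") can be reproduced outside the PAP when debugging this topic locally. A minimal sketch under those assumptions; the broker address and topic name are taken from this log, the test payload is hypothetical, and this is not the PAP's own publisher code.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address as used throughout this log (kafka:9092); adjust for a local run.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        // Mirrors the logged policy-pap producer config: String values, idempotence,
        // effectively unlimited retries and a 60s transaction timeout.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 60000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical test message; the real PDP_UPDATE payloads are built by policy-pap itself.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            producer.flush();
        }
    }
}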
17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.816+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:40 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.816+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:40 policy-db-migrator | JOIN pdpstatistics b 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 17:04:40 policy-pap | [2024-02-16T17:03:25.816+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708103005816 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-db-migrator | SET a.id = b.id 17:04:40 policy-pap | [2024-02-16T17:03:25.817+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=20aa803e-aaa7-49a0-8412-571e8be7fdc7, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:25.817+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:25.817+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.819+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 17:04:40 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.819+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.822+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 17:04:40 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.824+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.824+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.826+00:00|INFO|TimerManager|Thread-9] timer manager update started 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.827+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 17:04:40 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.828+00:00|INFO|ServiceManager|main] Policy PAP started 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.829+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:25.830+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 12.729 seconds (process running for 13.606) 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.263+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.264+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.264+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 17:04:40 kafka | [2024-02-16 17:02:51,661] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.264+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.266+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 5 with epoch 0 17:04:40 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 17:04:40 kafka | [2024-02-16 17:02:51,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.266+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 4 with epoch 0 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.266+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.280+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] (Re-)joining group 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.304+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Request joining group due to: 
need to re-join with the given member-id: consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3-af3a8725-a21a-4f60-8627-4eab7e3b5895 17:04:40 policy-db-migrator | > upgrade 0210-sequence.sql 17:04:40 kafka | [2024-02-16 17:02:51,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.304+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.304+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] (Re-)joining group 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:04:40 kafka | [2024-02-16 17:02:51,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.356+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Error while fetching metadata with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.365+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:26.365+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 
vB0B1qTrTYKUb3QN_6Wq6A 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:26.368+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:04:40 kafka | [2024-02-16 17:02:51,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0220-sequence.sql 17:04:40 policy-pap | [2024-02-16T17:03:26.373+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 17:04:40 kafka | [2024-02-16 17:02:51,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:26.376+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-3aa6bf85-4fda-43f2-b6c8-6e2b6568a88c 17:04:40 kafka | [2024-02-16 17:02:51,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 17:04:40 policy-pap | [2024-02-16T17:03:26.376+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 17:04:40 kafka | [2024-02-16 17:02:51,672] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:26.376+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 17:04:40 kafka | [2024-02-16 17:02:51,672] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:29.309+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Successfully joined group with generation Generation{generationId=1, memberId='consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3-af3a8725-a21a-4f60-8627-4eab7e3b5895', protocol='range'} 17:04:40 kafka | [2024-02-16 17:02:51,672] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:29.316+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Finished assignment for group at generation 1: {consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3-af3a8725-a21a-4f60-8627-4eab7e3b5895=Assignment(partitions=[policy-pdp-pap-0])} 17:04:40 kafka | [2024-02-16 17:02:51,672] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 17:04:40 policy-pap | [2024-02-16T17:03:29.326+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Successfully synced group in generation Generation{generationId=1, memberId='consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3-af3a8725-a21a-4f60-8627-4eab7e3b5895', protocol='range'} 17:04:40 kafka | [2024-02-16 17:02:51,672] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:29.326+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 17:04:40 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 17:04:40 policy-pap | [2024-02-16T17:03:29.331+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Adding newly assigned partitions: policy-pdp-pap-0 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:29.340+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Found no committed offset for partition policy-pdp-pap-0 17:04:40 policy-pap | [2024-02-16T17:03:29.349+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3, groupId=084a2e58-01c1-4612-9881-9e51d9ffa3ed] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
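The join / sync / "Adding newly assigned partitions" / "Found no committed offset" sequence logged above for the KAFKA-source-policy-pdp-pap consumer is the standard Kafka group rebalance, and the same events can be observed from any client with a rebalance listener. A minimal sketch assuming the broker and topic names from this log; the group id is illustrative and this is not the PAP consumer implementation.

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "pdp-pap-debug"); // illustrative group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Matches the "no committed offset -> reset to latest position" behaviour seen above.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("policy-pdp-pap"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    System.out.println("Revoked: " + partitions);
                }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Corresponds to "Adding newly assigned partitions: policy-pdp-pap-0" in the log.
                    System.out.println("Assigned: " + partitions);
                }
            });
            // A single poll is enough to trigger the join/sync/assignment sequence.
            consumer.poll(Duration.ofSeconds(10)).forEach(r -> System.out.println(r.value()));
        }
    }
}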
17:04:40 policy-pap | [2024-02-16T17:03:29.379+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-3aa6bf85-4fda-43f2-b6c8-6e2b6568a88c', protocol='range'}
17:04:40 policy-pap | [2024-02-16T17:03:29.380+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-3aa6bf85-4fda-43f2-b6c8-6e2b6568a88c=Assignment(partitions=[policy-pdp-pap-0])}
17:04:40 policy-pap | [2024-02-16T17:03:29.386+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-3aa6bf85-4fda-43f2-b6c8-6e2b6568a88c', protocol='range'}
17:04:40 policy-pap | [2024-02-16T17:03:29.386+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
17:04:40 policy-pap | [2024-02-16T17:03:29.386+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
17:04:40 policy-pap | [2024-02-16T17:03:29.388+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
17:04:40 policy-pap | [2024-02-16T17:03:29.390+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
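Both consumers in this run report "Found no committed offset" and then a reset of policy-pdp-pap-0 to the current log end, which is the auto.offset.reset path for a brand-new group. A small sketch of how the committed offset and the resolved position can be checked by hand under the same assumptions (kafka:9092, topic policy-pdp-pap, partition 0); the group id is hypothetical and this is purely illustrative.

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetInspectionSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "offset-inspector"); // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        TopicPartition tp = new TopicPartition("policy-pdp-pap", 0);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));
            // Returns a null entry for a group that has never committed, matching
            // "Found no committed offset for partition policy-pdp-pap-0" above.
            Map<TopicPartition, OffsetAndMetadata> committed =
                    consumer.committed(Collections.singleton(tp), Duration.ofSeconds(5));
            System.out.println("Committed: " + committed.get(tp));
            // position() forces the reset described by "Resetting offset ... to position FetchPosition{...}".
            System.out.println("Resolved position: " + consumer.position(tp));
        }
    }
}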
17:04:40 policy-pap | [2024-02-16T17:03:47.535+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 17:04:40 policy-pap | [] 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:47.536+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:04:40 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"89129af7-b008-4a1b-9ec6-eb95469de049","timestampMs":1708103027493,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup"} 17:04:40 policy-pap | [2024-02-16T17:03:47.537+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 17:04:40 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"89129af7-b008-4a1b-9ec6-eb95469de049","timestampMs":1708103027493,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup"} 17:04:40 policy-pap | [2024-02-16T17:03:47.545+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:47.856+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting 17:04:40 policy-pap | [2024-02-16T17:03:47.856+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting listener 17:04:40 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:47.857+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting timer 
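The PDP_STATUS heartbeat logged just above is plain JSON on the policy-heartbeat and policy-pdp-pap topics, so it can be decoded with any JSON mapper when inspecting those topics manually. A minimal sketch using Jackson and a view class limited to the fields visible in the logged payload; the mapper choice and class are assumptions for illustration, not the serializer policy-pap itself uses.

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PdpStatusDecodeSketch {
    // Only the fields visible in the logged heartbeat; the real PdpStatus message carries more.
    public static class PdpStatusView {
        public String pdpType;
        public String state;
        public String healthy;
        public String description;
        public String messageName;
        public String requestId;
        public long timestampMs;
        public String name;
        public String pdpGroup;
    }

    public static void main(String[] args) throws Exception {
        // Payload copied from the heartbeat logged above.
        String payload = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                + "\"description\":\"Pdp Heartbeat\",\"messageName\":\"PDP_STATUS\","
                + "\"requestId\":\"89129af7-b008-4a1b-9ec6-eb95469de049\","
                + "\"timestampMs\":1708103027493,"
                + "\"name\":\"apex-91910ceb-155f-47b8-a743-3152f517fc5f\",\"pdpGroup\":\"defaultGroup\"}";

        ObjectMapper mapper = new ObjectMapper()
                .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        PdpStatusView status = mapper.readValue(payload, PdpStatusView.class);
        System.out.println(status.name + " -> " + status.state + " (" + status.messageName + ")");
    }
}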
17:04:40 policy-pap | [2024-02-16T17:03:47.860+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c4e106f0-4746-43d2-a87a-ad001ce96df0, expireMs=1708103057860] 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:47.862+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting enqueue 17:04:40 policy-pap | [2024-02-16T17:03:47.862+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=c4e106f0-4746-43d2-a87a-ad001ce96df0, expireMs=1708103057860] 17:04:40 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:47.863+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate started 17:04:40 policy-pap | [2024-02-16T17:03:47.869+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c4e106f0-4746-43d2-a87a-ad001ce96df0","timestampMs":1708103027712,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-pap | [2024-02-16T17:03:47.948+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 17:04:40 policy-pap | 
{"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c4e106f0-4746-43d2-a87a-ad001ce96df0","timestampMs":1708103027712,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-pap | [2024-02-16T17:03:47.949+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 17:04:40 policy-pap | [2024-02-16T17:03:47.954+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c4e106f0-4746-43d2-a87a-ad001ce96df0","timestampMs":1708103027712,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:47.954+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 17:04:40 policy-pap | [2024-02-16T17:03:47.989+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:04:40 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"660a8e24-1e60-4f24-86fc-3b683cbb50d9","timestampMs":1708103027972,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup"} 17:04:40 policy-db-migrator | > upgrade 0120-toscatrigger.sql 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:47.998+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"660a8e24-1e60-4f24-86fc-3b683cbb50d9","timestampMs":1708103027972,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup"} 17:04:40 policy-pap | [2024-02-16T17:03:47.999+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 17:04:40 policy-pap | [2024-02-16T17:03:47.999+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c4e106f0-4746-43d2-a87a-ad001ce96df0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c9d56859-f8bc-4d48-a940-7b0a56f9b061","timestampMs":1708103027976,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,673] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
17:04:40 policy-pap | [2024-02-16T17:03:48.036+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping
17:04:40 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:48.041+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
17:04:40 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c4e106f0-4746-43d2-a87a-ad001ce96df0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c9d56859-f8bc-4d48-a940-7b0a56f9b061","timestampMs":1708103027976,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.041+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c4e106f0-4746-43d2-a87a-ad001ce96df0
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping enqueue
17:04:40 policy-pap | [2024-02-16T17:03:48.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping timer
17:04:40 policy-pap | [2024-02-16T17:03:48.042+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c4e106f0-4746-43d2-a87a-ad001ce96df0, expireMs=1708103057860]
17:04:40 policy-pap | [2024-02-16T17:03:48.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping listener
17:04:40 policy-pap | [2024-02-16T17:03:48.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopped
17:04:40 policy-pap | [2024-02-16T17:03:48.051+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate successful
17:04:40 policy-pap | [2024-02-16T17:03:48.051+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f start publishing next request
17:04:40 policy-pap | [2024-02-16T17:03:48.052+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange starting
17:04:40 policy-pap | [2024-02-16T17:03:48.052+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange starting listener
17:04:40 policy-pap | [2024-02-16T17:03:48.052+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange starting timer
17:04:40 policy-pap | [2024-02-16T17:03:48.052+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=042a6145-56dc-4711-9864-8edc62c6935b, expireMs=1708103058052]
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | 
17:04:40 policy-pap | [2024-02-16T17:03:48.052+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange starting enqueue
17:04:40 policy-pap | [2024-02-16T17:03:48.052+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange started
17:04:40 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:48.054+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29998ms Timer [name=042a6145-56dc-4711-9864-8edc62c6935b, expireMs=1708103058052]
17:04:40 policy-pap | [2024-02-16T17:03:48.055+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"042a6145-56dc-4711-9864-8edc62c6935b","timestampMs":1708103027713,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 policy-pap | [2024-02-16T17:03:48.067+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:40 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"042a6145-56dc-4711-9864-8edc62c6935b","timestampMs":1708103027713,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 policy-pap | [2024-02-16T17:03:48.067+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | 
17:04:40 policy-pap | [2024-02-16T17:03:48.084+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:40 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"042a6145-56dc-4711-9864-8edc62c6935b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"eb9c4d77-f65c-4fdc-863d-a8584c2b63f2","timestampMs":1708103028069,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 policy-db-migrator | > upgrade 0140-toscaparameter.sql
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:48.084+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 042a6145-56dc-4711-9864-8edc62c6935b
17:04:40 policy-pap | [2024-02-16T17:03:48.130+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"042a6145-56dc-4711-9864-8edc62c6935b","timestampMs":1708103027713,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 policy-pap | [2024-02-16T17:03:48.130+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
17:04:40 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:48.134+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:40 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"042a6145-56dc-4711-9864-8edc62c6935b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"eb9c4d77-f65c-4fdc-863d-a8584c2b63f2","timestampMs":1708103028069,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 kafka | [2024-02-16 17:02:51,674] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.134+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange stopping
17:04:40 policy-pap | [2024-02-16T17:03:48.134+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange stopping enqueue
17:04:40 policy-pap | [2024-02-16T17:03:48.134+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange stopping timer
17:04:40 policy-pap | [2024-02-16T17:03:48.134+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=042a6145-56dc-4711-9864-8edc62c6935b, expireMs=1708103058052]
17:04:40 policy-pap | [2024-02-16T17:03:48.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange stopping listener
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | 
17:04:40 policy-pap | [2024-02-16T17:03:48.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange stopped
17:04:40 policy-pap | [2024-02-16T17:03:48.135+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpStateChange successful
17:04:40 policy-pap | [2024-02-16T17:03:48.135+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f start publishing next request
17:04:40 policy-pap | [2024-02-16T17:03:48.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting listener
17:04:40 policy-pap | [2024-02-16T17:03:48.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting timer
17:04:40 policy-pap | [2024-02-16T17:03:48.135+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=75d5babd-56a0-4abd-aad7-d01c728af538, expireMs=1708103058135]
17:04:40 policy-db-migrator | > upgrade 0150-toscaproperty.sql
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:48.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting enqueue
17:04:40 policy-pap | [2024-02-16T17:03:48.136+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate started
17:04:40 policy-pap | [2024-02-16T17:03:48.136+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"75d5babd-56a0-4abd-aad7-d01c728af538","timestampMs":1708103028116,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 policy-pap | [2024-02-16T17:03:48.158+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"75d5babd-56a0-4abd-aad7-d01c728af538","timestampMs":1708103028116,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 policy-pap | [2024-02-16T17:03:48.158+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
17:04:40 policy-pap | [2024-02-16T17:03:48.159+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"75d5babd-56a0-4abd-aad7-d01c728af538","timestampMs":1708103028116,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.160+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.164+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,675] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
17:04:40 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"75d5babd-56a0-4abd-aad7-d01c728af538","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"893654c5-ad78-4d7e-a165-4a0ab3a005f9","timestampMs":1708103028151,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 kafka | [2024-02-16 17:02:51,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.164+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping
17:04:40 kafka | [2024-02-16 17:02:51,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.164+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping enqueue
17:04:40 kafka | [2024-02-16 17:02:51,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.164+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping timer
17:04:40 kafka | [2024-02-16 17:02:51,676] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,677] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.164+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=75d5babd-56a0-4abd-aad7-d01c728af538, expireMs=1708103058135]
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.164+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping listener
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.164+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopped
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 policy-pap | [2024-02-16T17:03:48.167+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"75d5babd-56a0-4abd-aad7-d01c728af538","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"893654c5-ad78-4d7e-a165-4a0ab3a005f9","timestampMs":1708103028151,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | --------------
17:04:40 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
17:04:40 policy-pap | [2024-02-16T17:03:48.168+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 75d5babd-56a0-4abd-aad7-d01c728af538
17:04:40 policy-db-migrator | --------------
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:48.179+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate successful
17:04:40 policy-pap | [2024-02-16T17:03:48.179+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f has no more requests
17:04:40 policy-pap | [2024-02-16T17:03:56.585+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet'
17:04:40 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:56.585+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet'
17:04:40 policy-pap | [2024-02-16T17:03:56.588+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 3 ms
17:04:40 policy-pap | [2024-02-16T17:03:56.911+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group defaultGroup
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | 
17:04:40 policy-pap | [2024-02-16T17:03:57.545+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.policies.native.apex.ac.element 1.0.0
17:04:40 policy-pap | [2024-02-16T17:03:57.549+00:00|INFO|SessionData|http-nio-6969-exec-1] add update apex-91910ceb-155f-47b8-a743-3152f517fc5f defaultGroup apex policies=1
17:04:40 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:57.552+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group defaultGroup
17:04:40 policy-pap | [2024-02-16T17:03:57.553+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group defaultGroup
17:04:40 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:57.594+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=apex, policy=onap.policies.native.apex.ac.element 1.0.0, action=DEPLOYMENT, timestamp=2024-02-16T17:03:57Z, user=policyadmin)]
17:04:40 policy-pap | [2024-02-16T17:03:57.651+00:00|INFO|ServiceManager|http-nio-6969-exec-1] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:57.651+00:00|INFO|ServiceManager|http-nio-6969-exec-1] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting listener
17:04:40 policy-pap | [2024-02-16T17:03:57.651+00:00|INFO|ServiceManager|http-nio-6969-exec-1] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting timer
17:04:40 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
17:04:40 policy-db-migrator | --------------
17:04:40 policy-db-migrator | 
17:04:40 policy-pap | [2024-02-16T17:03:57.651+00:00|INFO|TimerManager|http-nio-6969-exec-1] update timer registered Timer [name=01a3277e-772a-4c95-b79a-4ecc96f9bfb9, expireMs=1708103067651]
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:57.652+00:00|INFO|ServiceManager|http-nio-6969-exec-1] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting enqueue
17:04:40 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
17:04:40 policy-db-migrator | --------------
17:04:40 policy-db-migrator | 
17:04:40 policy-pap | [2024-02-16T17:03:57.652+00:00|INFO|ServiceManager|http-nio-6969-exec-1] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate started
17:04:40 policy-db-migrator | --------------
17:04:40 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
17:04:40 policy-db-migrator | --------------
17:04:40 policy-pap | [2024-02-16T17:03:57.662+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
17:04:40 policy-db-migrator | --------------
17:04:40 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
17:04:40 policy-db-migrator | --------------
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | > upgrade 0100-upgrade.sql
17:04:40 policy-db-migrator | --------------
17:04:40 policy-db-migrator | select 'upgrade to 1100 completed' as msg
17:04:40 policy-db-migrator | --------------
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | msg
17:04:40 policy-db-migrator | upgrade to 1100 completed
17:04:40 policy-db-migrator | 
17:04:40 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
17:04:40 policy-db-migrator | --------------
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
17:04:40 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
17:04:40 policy-pap | 
{"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[{"type":"onap.policies.native.Apex","type_version":"1.0.0","properties":{"eventInputParameters":{"DmaapConsumer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","groupId":"clamp-grp","enableAutoCommit":true,"autoCommitTime":1000,"sessionTimeout":30000,"consumerPollTime":100,"consumerTopicList":["ac_element_msg"],"keyDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","valueDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseEvent"}},"eventName":"AcElementEvent","eventNameFilter":"AcElementEvent"}},"engineServiceParameters":{"name":"MyApexEngine","version":"0.0.1","id":45,"instanceCount":2,"deploymentPort":12561,"engineParameters":{"executorParameters":{"JAVASCRIPT":{"parameterClassName":"org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"}},"contextParameters":{"parameterClassName":"org.onap.policy.apex.context.parameters.ContextParameters","schemaParameters":{"Json":{"parameterClassName":"org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"}}}},"policy_type_impl":{"policies":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"policyMap":{"entry":[{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"policyKey":{"name":"ReceiveEventPolicy","version":"0.0.1"},"template":"Freestyle","state":{"entry":[{"key":"DecideForwardingState","value":{"stateKey":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DecideForwardingState"},"trigger":{"name":"AcElementEvent","version":"0.0.1"},"stateOutputs":{"entry":[{"key":"CreateForwardPayload","value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"},"outgoingEvent":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"outgoingEventReference":[{"name":"DmaapResponseStatusEvent","version":"0.0.1"}],"nextState":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}}}]},"contextAlbumReference":[],"taskSelectionLogic":{"key":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"},"logicFlavour":"UNDEFINED","logic":""},"stateFinalizerLogicMap":{"entry":[]},"defaultTask":{"name":"ForwardPayloadTask","version":"0.0.1"},"taskReferences":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"ReceiveEventPolicy"},"outputType":"DIRECT","output":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"}}}]}}}]},"firstState":"DecideForwardingState"}}]}},"tasks":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"taskMap":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"inputEvent":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","s
ource":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"},"outputEvents":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]},"taskParameters":{"entry":[]},"contextAlbumReference":[{"name":"ACElementAlbum","version":"0.0.1"}],"taskLogic":{"key":{"parentKeyName":"ForwardPayloadTask","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"TaskLogic"},"logicFlavour":"JAVASCRIPT","logic":"/*\n * ============LICENSE_START=======================================================\n * Copyright (C) 2022 Nordix. All rights reserved.\n * ================================================================================\n * Licensed under the Apache License, Version 2.0 (the 'License');\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an 'AS IS' BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n * SPDX-License-Identifier: Apache-2.0\n * ============LICENSE_END=========================================================\n */\n\nexecutor.logger.info(executor.subject.id);\nexecutor.logger.info(executor.inFields);\n\nvar msgResponse = executor.inFields.get('DmaapResponseEvent');\nexecutor.logger.info('Task in progress with mesages: ' + msgResponse);\n\nvar elementId = msgResponse.get('elementId').get('name');\n\nif (msgResponse.get('messageType') == 'STATUS' &&\n (elementId == 'onap.policy.clamp.ac.startertobridge'\n || elementId == 'onap.policy.clamp.ac.bridgetosink')) {\n\n var receiverId = '';\n if (elementId == 'onap.policy.clamp.ac.startertobridge') {\n receiverId = 'onap.policy.clamp.ac.bridge';\n } else {\n receiverId = 'onap.policy.clamp.ac.sink';\n }\n\n var elementIdResponse = new java.util.HashMap();\n elementIdResponse.put('name', receiverId);\n elementIdResponse.put('version', msgResponse.get('elementId').get('version'));\n\n var dmaapResponse = new java.util.HashMap();\n dmaapResponse.put('elementId', elementIdResponse);\n\n var message = msgResponse.get('message') + ' trace added from policy';\n dmaapResponse.put('message', message);\n dmaapResponse.put('messageType', 'STATUS');\n dmaapResponse.put('messageId', msgResponse.get('messageId'));\n dmaapResponse.put('timestamp', msgResponse.get('timestamp'));\n\n executor.logger.info('Sending forwarding Event to Ac element: ' + dmaapResponse);\n\n executor.outFields.put('DmaapResponseStatusEvent', 
dmaapResponse);\n}\n\ntrue;"}}}]}},"events":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"eventMap":{"entry":[{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"file","parameter":{"entry":[{"key":"final_status","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"final_status"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}},{"key":"message","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"message"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]}},"albums":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"albums":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"scope":"policy","isWritable":true,"itemSchema":{"name":"ACEventType","version":"0.0.1"}}}]}},"schemas":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"schemas":{"entry":[{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"schemaFlavour":"Json","schemaDefinition":"{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"elementId\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"version\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"name\",\n \"version\"\n ]\n },\n \"message\": {\n \"type\": \"string\"\n },\n \"messageType\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"elementId\",\n \"message\",\n \"messageType\"\n 
]\n}"}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.Integer"}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.String"}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.util.UUID"}}]}},"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"keyInformation":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"keyInfoMap":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"UUID":"7cddfab8-6d3f-3f7f-8ac3-e2eb5979c900","description":"Generated description for concept referred to by key \"ACElementAlbum:0.0.1\""}},{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"UUID":"dab78794-b666-3929-a75b-70d634b04fe5","description":"Generated description for concept referred to by key \"ACEventType:0.0.1\""}},{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"UUID":"da478611-7d77-3c46-b4be-be968769ba4e","description":"Generated description for concept referred to by key \"APEXacElementPolicy:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"UUID":"fa8dc15e-8c8d-3de3-a0f8-585b76511175","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Albums:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"UUID":"8508cd65-8dd2-342d-a5c6-1570810dbe2b","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Events:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"UUID":"09e6927d-c5ac-3779-919f-9333994eed22","description":"Generated description for concept referred to by key \"APEXacElementPolicy_KeyInfo:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"UUID":"cade3c9a-1600-3642-a6f4-315612187f46","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Policies:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"UUID":"5bb4a8e9-35fa-37db-9a49-48ef036a7ba9","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Schemas:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"UUID":"2527eeec-0d1f-3094-ad3f-212622b12836","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Tasks:0.0.1\""}},{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"UUID":"32c013e2-2740-3986-a626-cbdf665b63e9","description":"Generated description for concept referred to by key 
\"AcElementEvent:0.0.1\""}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"UUID":"2715cb6c-2778-3461-8b69-871e79f95935","description":"Generated description for concept referred to by key \"DmaapResponseStatusEvent:0.0.1\""}},{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"UUID":"51defa03-1ecf-3314-bf34-2a652bce57fa","description":"Generated description for concept referred to by key \"ForwardPayloadTask:0.0.1\""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"UUID":"c540f048-96af-35e3-a36e-e9c29377cba7","description":"Generated description for concept referred to by key \"LogEvent:0.0.1\""}},{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"UUID":"568b7345-9de1-36d3-b6a3-9b857e6809a1","description":"Generated description for concept referred to by key \"ReceiveEventPolicy:0.0.1\""}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"UUID":"153791fd-ae0a-36a7-88a5-309a7936415d","description":"Generated description for concept referred to by key \"SimpleIntType:0.0.1\""}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"UUID":"8a4957cf-9493-3a76-8c22-a208e23259af","description":"Generated description for concept referred to by key \"SimpleStringType:0.0.1\""}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"UUID":"6a8cc68e-dfc8-3403-9c6d-071c886b319c","description":"Generated description for concept referred to by key \"UUIDType:0.0.1\""}}]}}}},"eventOutputParameters":{"logOutputter":{"carrierTechnologyParameters":{"carrierTechnology":"FILE","parameters":{"fileName":"outputevents.log"}},"eventProtocolParameters":{"eventProtocol":"JSON"}},"DmaapReplyProducer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","acks":"all","retries":0,"batchSize":16384,"lingerTime":1,"bufferMemory":33554432,"producerTopic":"policy_update_msg","keySerializer":"org.apache.kafka.common.serialization.StringSerializer","valueSerializer":"org.apache.kafka.common.serialization.StringSerializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseStatusEvent"}},"eventNameFilter":"LogEvent|DmaapResponseStatusEvent"}}},"name":"onap.policies.native.apex.ac.element","version":"1.0.0","metadata":{"policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"01a3277e-772a-4c95-b79a-4ecc96f9bfb9","timestampMs":1708103037546,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | 
[2024-02-16T17:03:57.675+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[{"type":"onap.policies.native.Apex","type_version":"1.0.0","properties":{"eventInputParameters":{"DmaapConsumer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","groupId":"clamp-grp","enableAutoCommit":true,"autoCommitTime":1000,"sessionTimeout":30000,"consumerPollTime":100,"consumerTopicList":["ac_element_msg"],"keyDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","valueDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseEvent"}},"eventName":"AcElementEvent","eventNameFilter":"AcElementEvent"}},"engineServiceParameters":{"name":"MyApexEngine","version":"0.0.1","id":45,"instanceCount":2,"deploymentPort":12561,"engineParameters":{"executorParameters":{"JAVASCRIPT":{"parameterClassName":"org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"}},"contextParameters":{"parameterClassName":"org.onap.policy.apex.context.parameters.ContextParameters","schemaParameters":{"Json":{"parameterClassName":"org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"}}}},"policy_type_impl":{"policies":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"policyMap":{"entry":[{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"policyKey":{"name":"ReceiveEventPolicy","version":"0.0.1"},"template":"Freestyle","state":{"entry":[{"key":"DecideForwardingState","value":{"stateKey":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DecideForwardingState"},"trigger":{"name":"AcElementEvent","version":"0.0.1"},"stateOutputs":{"entry":[{"key":"CreateForwardPayload","value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"},"outgoingEvent":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"outgoingEventReference":[{"name":"DmaapResponseStatusEvent","version":"0.0.1"}],"nextState":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}}}]},"contextAlbumReference":[],"taskSelectionLogic":{"key":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}
,"logicFlavour":"UNDEFINED","logic":""},"stateFinalizerLogicMap":{"entry":[]},"defaultTask":{"name":"ForwardPayloadTask","version":"0.0.1"},"taskReferences":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"ReceiveEventPolicy"},"outputType":"DIRECT","output":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"}}}]}}}]},"firstState":"DecideForwardingState"}}]}},"tasks":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"taskMap":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"inputEvent":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"},"outputEvents":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]},"taskParameters":{"entry":[]},"contextAlbumReference":[{"name":"ACElementAlbum","version":"0.0.1"}],"taskLogic":{"key":{"parentKeyName":"ForwardPayloadTask","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"TaskLogic"},"logicFlavour":"JAVASCRIPT","logic":"/*\n * ============LICENSE_START=======================================================\n * Copyright (C) 2022 Nordix. 
All rights reserved.\n * ================================================================================\n * Licensed under the Apache License, Version 2.0 (the 'License');\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an 'AS IS' BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n * SPDX-License-Identifier: Apache-2.0\n * ============LICENSE_END=========================================================\n */\n\nexecutor.logger.info(executor.subject.id);\nexecutor.logger.info(executor.inFields);\n\nvar msgResponse = executor.inFields.get('DmaapResponseEvent');\nexecutor.logger.info('Task in progress with mesages: ' + msgResponse);\n\nvar elementId = msgResponse.get('elementId').get('name');\n\nif (msgResponse.get('messageType') == 'STATUS' &&\n (elementId == 'onap.policy.clamp.ac.startertobridge'\n || elementId == 'onap.policy.clamp.ac.bridgetosink')) {\n\n var receiverId = '';\n if (elementId == 'onap.policy.clamp.ac.startertobridge') {\n receiverId = 'onap.policy.clamp.ac.bridge';\n } else {\n receiverId = 'onap.policy.clamp.ac.sink';\n }\n\n var elementIdResponse = new java.util.HashMap();\n elementIdResponse.put('name', receiverId);\n elementIdResponse.put('version', msgResponse.get('elementId').get('version'));\n\n var dmaapResponse = new java.util.HashMap();\n dmaapResponse.put('elementId', elementIdResponse);\n\n var message = msgResponse.get('message') + ' trace added from policy';\n dmaapResponse.put('message', message);\n dmaapResponse.put('messageType', 'STATUS');\n dmaapResponse.put('messageId', msgResponse.get('messageId'));\n dmaapResponse.put('timestamp', msgResponse.get('timestamp'));\n\n executor.logger.info('Sending forwarding Event to Ac element: ' + dmaapResponse);\n\n executor.outFields.put('DmaapResponseStatusEvent', 
dmaapResponse);\n}\n\ntrue;"}}}]}},"events":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"eventMap":{"entry":[{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"file","parameter":{"entry":[{"key":"final_status","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"final_status"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}},{"key":"message","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"message"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]}},"albums":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"albums":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"scope":"policy","isWritable":true,"itemSchema":{"name":"ACEventType","version":"0.0.1"}}}]}},"schemas":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"schemas":{"entry":[{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"schemaFlavour":"Json","schemaDefinition":"{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"elementId\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"version\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"name\",\n \"version\"\n ]\n },\n \"message\": {\n \"type\": \"string\"\n },\n \"messageType\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"elementId\",\n \"message\",\n \"messageType\"\n 
]\n}"}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.Integer"}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.String"}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.util.UUID"}}]}},"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"keyInformation":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"keyInfoMap":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"UUID":"7cddfab8-6d3f-3f7f-8ac3-e2eb5979c900","description":"Generated description for concept referred to by key \"ACElementAlbum:0.0.1\""}},{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"UUID":"dab78794-b666-3929-a75b-70d634b04fe5","description":"Generated description for concept referred to by key \"ACEventType:0.0.1\""}},{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"UUID":"da478611-7d77-3c46-b4be-be968769ba4e","description":"Generated description for concept referred to by key \"APEXacElementPolicy:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"UUID":"fa8dc15e-8c8d-3de3-a0f8-585b76511175","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Albums:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"UUID":"8508cd65-8dd2-342d-a5c6-1570810dbe2b","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Events:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"UUID":"09e6927d-c5ac-3779-919f-9333994eed22","description":"Generated description for concept referred to by key \"APEXacElementPolicy_KeyInfo:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"UUID":"cade3c9a-1600-3642-a6f4-315612187f46","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Policies:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"UUID":"5bb4a8e9-35fa-37db-9a49-48ef036a7ba9","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Schemas:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"UUID":"2527eeec-0d1f-3094-ad3f-212622b12836","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Tasks:0.0.1\""}},{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"UUID":"32c013e2-2740-3986-a626-cbdf665b63e9","description":"Generated description for concept referred to by key 
\"AcElementEvent:0.0.1\""}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"UUID":"2715cb6c-2778-3461-8b69-871e79f95935","description":"Generated description for concept referred to by key \"DmaapResponseStatusEvent:0.0.1\""}},{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"UUID":"51defa03-1ecf-3314-bf34-2a652bce57fa","description":"Generated description for concept referred to by key \"ForwardPayloadTask:0.0.1\""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"UUID":"c540f048-96af-35e3-a36e-e9c29377cba7","description":"Generated description for concept referred to by key \"LogEvent:0.0.1\""}},{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"UUID":"568b7345-9de1-36d3-b6a3-9b857e6809a1","description":"Generated description for concept referred to by key \"ReceiveEventPolicy:0.0.1\""}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"UUID":"153791fd-ae0a-36a7-88a5-309a7936415d","description":"Generated description for concept referred to by key \"SimpleIntType:0.0.1\""}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"UUID":"8a4957cf-9493-3a76-8c22-a208e23259af","description":"Generated description for concept referred to by key \"SimpleStringType:0.0.1\""}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"UUID":"6a8cc68e-dfc8-3403-9c6d-071c886b319c","description":"Generated description for concept referred to by key \"UUIDType:0.0.1\""}}]}}}},"eventOutputParameters":{"logOutputter":{"carrierTechnologyParameters":{"carrierTechnology":"FILE","parameters":{"fileName":"outputevents.log"}},"eventProtocolParameters":{"eventProtocol":"JSON"}},"DmaapReplyProducer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","acks":"all","retries":0,"batchSize":16384,"lingerTime":1,"bufferMemory":33554432,"producerTopic":"policy_update_msg","keySerializer":"org.apache.kafka.common.serialization.StringSerializer","valueSerializer":"org.apache.kafka.common.serialization.StringSerializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseStatusEvent"}},"eventNameFilter":"LogEvent|DmaapResponseStatusEvent"}}},"name":"onap.policies.native.apex.ac.element","version":"1.0.0","metadata":{"policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"01a3277e-772a-4c95-b79a-4ecc96f9bfb9","timestampMs":1708103037546,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | 
[2024-02-16T17:03:57.680+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 17:04:40 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:57.693+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[{"type":"onap.policies.native.Apex","type_version":"1.0.0","properties":{"eventInputParameters":{"DmaapConsumer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","groupId":"clamp-grp","enableAutoCommit":true,"autoCommitTime":1000,"sessionTimeout":30000,"consumerPollTime":100,"consumerTopicList":["ac_element_msg"],"keyDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","valueDeserializer":"org.apache.kafka.common.serialization.StringDeserializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseEvent"}},"eventName":"AcElementEvent","eventNameFilter":"AcElementEvent"}},"engineServiceParameters":{"name":"MyApexEngine","version":"0.0.1","id":45,"instanceCount":2,"deploymentPort":12561,"engineParameters":{"executorParameters":{"JAVASCRIPT":{"parameterClassName":"org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"}},"contextParameters":{"parameterClassName":"org.onap.policy.apex.context.parameters.ContextParameters","schemaParameters":{"Json":{"parameterClassName":"org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"}}}},"policy_type_impl":{"policies":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"policyMap":{"entry":[{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"policyKey":{"name":"ReceiveEventPolicy","version":"0.0.1"},"template":"Freestyle","state":{"entry":[{"key":"DecideForwardingState","value":{"stateKey":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DecideForwardingState"},"trigger":{"name":"AcElementEvent","version":"0.0.1"},"stateOutputs":{"entry":[{"key":"CreateForwardPayload","value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"},"outgoingEvent":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"outgoingEventReference":[{"name":"DmaapResponseStatusEvent","version":"0.0.1"}],"nextState":{"parentKeyName":"NULL","p
arentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"}}}]},"contextAlbumReference":[],"taskSelectionLogic":{"key":{"parentKeyName":"NULL","parentKeyVersion":"0.0.0","parentLocalName":"NULL","localName":"NULL"},"logicFlavour":"UNDEFINED","logic":""},"stateFinalizerLogicMap":{"entry":[]},"defaultTask":{"name":"ForwardPayloadTask","version":"0.0.1"},"taskReferences":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"ReceiveEventPolicy"},"outputType":"DIRECT","output":{"parentKeyName":"ReceiveEventPolicy","parentKeyVersion":"0.0.1","parentLocalName":"DecideForwardingState","localName":"CreateForwardPayload"}}}]}}}]},"firstState":"DecideForwardingState"}}]}},"tasks":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"taskMap":{"entry":[{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"inputEvent":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"},"outputEvents":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]},"taskParameters":{"entry":[]},"contextAlbumReference":[{"name":"ACElementAlbum","version":"0.0.1"}],"taskLogic":{"key":{"parentKeyName":"ForwardPayloadTask","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"TaskLogic"},"logicFlavour":"JAVASCRIPT","logic":"/*\n * ============LICENSE_START=======================================================\n * Copyright (C) 2022 Nordix. 
All rights reserved.\n * ================================================================================\n * Licensed under the Apache License, Version 2.0 (the 'License');\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an 'AS IS' BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n * SPDX-License-Identifier: Apache-2.0\n * ============LICENSE_END=========================================================\n */\n\nexecutor.logger.info(executor.subject.id);\nexecutor.logger.info(executor.inFields);\n\nvar msgResponse = executor.inFields.get('DmaapResponseEvent');\nexecutor.logger.info('Task in progress with mesages: ' + msgResponse);\n\nvar elementId = msgResponse.get('elementId').get('name');\n\nif (msgResponse.get('messageType') == 'STATUS' &&\n (elementId == 'onap.policy.clamp.ac.startertobridge'\n || elementId == 'onap.policy.clamp.ac.bridgetosink')) {\n\n var receiverId = '';\n if (elementId == 'onap.policy.clamp.ac.startertobridge') {\n receiverId = 'onap.policy.clamp.ac.bridge';\n } else {\n receiverId = 'onap.policy.clamp.ac.sink';\n }\n\n var elementIdResponse = new java.util.HashMap();\n elementIdResponse.put('name', receiverId);\n elementIdResponse.put('version', msgResponse.get('elementId').get('version'));\n\n var dmaapResponse = new java.util.HashMap();\n dmaapResponse.put('elementId', elementIdResponse);\n\n var message = msgResponse.get('message') + ' trace added from policy';\n dmaapResponse.put('message', message);\n dmaapResponse.put('messageType', 'STATUS');\n dmaapResponse.put('messageId', msgResponse.get('messageId'));\n dmaapResponse.put('timestamp', msgResponse.get('timestamp'));\n\n executor.logger.info('Sending forwarding Event to Ac element: ' + dmaapResponse);\n\n executor.outFields.put('DmaapResponseStatusEvent', 
dmaapResponse);\n}\n\ntrue;"}}}]}},"events":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"eventMap":{"entry":[{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"Dmaap","target":"APEX","parameter":{"entry":[{"key":"DmaapResponseEvent","value":{"key":{"parentKeyName":"AcElementEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":"ENTRY"}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"Dmaap","parameter":{"entry":[{"key":"DmaapResponseStatusEvent","value":{"key":{"parentKeyName":"DmaapResponseStatusEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"DmaapResponseStatusEvent"},"fieldSchemaKey":{"name":"ACEventType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"nameSpace":"org.onap.policy.apex.ac.element","source":"APEX","target":"file","parameter":{"entry":[{"key":"final_status","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"final_status"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}},{"key":"message","value":{"key":{"parentKeyName":"LogEvent","parentKeyVersion":"0.0.1","parentLocalName":"NULL","localName":"message"},"fieldSchemaKey":{"name":"SimpleStringType","version":"0.0.1"},"optional":false}}]},"toscaPolicyState":""}}]}},"albums":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"albums":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"scope":"policy","isWritable":true,"itemSchema":{"name":"ACEventType","version":"0.0.1"}}}]}},"schemas":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"schemas":{"entry":[{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"schemaFlavour":"Json","schemaDefinition":"{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"elementId\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"version\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"name\",\n \"version\"\n ]\n },\n \"message\": {\n \"type\": \"string\"\n },\n \"messageType\": {\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"elementId\",\n \"message\",\n \"messageType\"\n 
]\n}"}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.Integer"}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.lang.String"}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"schemaFlavour":"Java","schemaDefinition":"java.util.UUID"}}]}},"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"keyInformation":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"keyInfoMap":{"entry":[{"key":{"name":"ACElementAlbum","version":"0.0.1"},"value":{"key":{"name":"ACElementAlbum","version":"0.0.1"},"UUID":"7cddfab8-6d3f-3f7f-8ac3-e2eb5979c900","description":"Generated description for concept referred to by key \"ACElementAlbum:0.0.1\""}},{"key":{"name":"ACEventType","version":"0.0.1"},"value":{"key":{"name":"ACEventType","version":"0.0.1"},"UUID":"dab78794-b666-3929-a75b-70d634b04fe5","description":"Generated description for concept referred to by key \"ACEventType:0.0.1\""}},{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy","version":"0.0.1"},"UUID":"da478611-7d77-3c46-b4be-be968769ba4e","description":"Generated description for concept referred to by key \"APEXacElementPolicy:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Albums","version":"0.0.1"},"UUID":"fa8dc15e-8c8d-3de3-a0f8-585b76511175","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Albums:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Events","version":"0.0.1"},"UUID":"8508cd65-8dd2-342d-a5c6-1570810dbe2b","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Events:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_KeyInfo","version":"0.0.1"},"UUID":"09e6927d-c5ac-3779-919f-9333994eed22","description":"Generated description for concept referred to by key \"APEXacElementPolicy_KeyInfo:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Policies","version":"0.0.1"},"UUID":"cade3c9a-1600-3642-a6f4-315612187f46","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Policies:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Schemas","version":"0.0.1"},"UUID":"5bb4a8e9-35fa-37db-9a49-48ef036a7ba9","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Schemas:0.0.1\""}},{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"value":{"key":{"name":"APEXacElementPolicy_Tasks","version":"0.0.1"},"UUID":"2527eeec-0d1f-3094-ad3f-212622b12836","description":"Generated description for concept referred to by key \"APEXacElementPolicy_Tasks:0.0.1\""}},{"key":{"name":"AcElementEvent","version":"0.0.1"},"value":{"key":{"name":"AcElementEvent","version":"0.0.1"},"UUID":"32c013e2-2740-3986-a626-cbdf665b63e9","description":"Generated description for concept referred to by key 
\"AcElementEvent:0.0.1\""}},{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"value":{"key":{"name":"DmaapResponseStatusEvent","version":"0.0.1"},"UUID":"2715cb6c-2778-3461-8b69-871e79f95935","description":"Generated description for concept referred to by key \"DmaapResponseStatusEvent:0.0.1\""}},{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"value":{"key":{"name":"ForwardPayloadTask","version":"0.0.1"},"UUID":"51defa03-1ecf-3314-bf34-2a652bce57fa","description":"Generated description for concept referred to by key \"ForwardPayloadTask:0.0.1\""}},{"key":{"name":"LogEvent","version":"0.0.1"},"value":{"key":{"name":"LogEvent","version":"0.0.1"},"UUID":"c540f048-96af-35e3-a36e-e9c29377cba7","description":"Generated description for concept referred to by key \"LogEvent:0.0.1\""}},{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"value":{"key":{"name":"ReceiveEventPolicy","version":"0.0.1"},"UUID":"568b7345-9de1-36d3-b6a3-9b857e6809a1","description":"Generated description for concept referred to by key \"ReceiveEventPolicy:0.0.1\""}},{"key":{"name":"SimpleIntType","version":"0.0.1"},"value":{"key":{"name":"SimpleIntType","version":"0.0.1"},"UUID":"153791fd-ae0a-36a7-88a5-309a7936415d","description":"Generated description for concept referred to by key \"SimpleIntType:0.0.1\""}},{"key":{"name":"SimpleStringType","version":"0.0.1"},"value":{"key":{"name":"SimpleStringType","version":"0.0.1"},"UUID":"8a4957cf-9493-3a76-8c22-a208e23259af","description":"Generated description for concept referred to by key \"SimpleStringType:0.0.1\""}},{"key":{"name":"UUIDType","version":"0.0.1"},"value":{"key":{"name":"UUIDType","version":"0.0.1"},"UUID":"6a8cc68e-dfc8-3403-9c6d-071c886b319c","description":"Generated description for concept referred to by key \"UUIDType:0.0.1\""}}]}}}},"eventOutputParameters":{"logOutputter":{"carrierTechnologyParameters":{"carrierTechnology":"FILE","parameters":{"fileName":"outputevents.log"}},"eventProtocolParameters":{"eventProtocol":"JSON"}},"DmaapReplyProducer":{"carrierTechnologyParameters":{"carrierTechnology":"KAFKA","parameterClassName":"org.onap.policy.apex.plugins.event.carrier.kafka.KafkaCarrierTechnologyParameters","parameters":{"bootstrapServers":"kafka:9092","acks":"all","retries":0,"batchSize":16384,"lingerTime":1,"bufferMemory":33554432,"producerTopic":"policy_update_msg","keySerializer":"org.apache.kafka.common.serialization.StringSerializer","valueSerializer":"org.apache.kafka.common.serialization.StringSerializer","kafkaProperties":[]}},"eventProtocolParameters":{"eventProtocol":"JSON","parameters":{"pojoField":"DmaapResponseStatusEvent"}},"eventNameFilter":"LogEvent|DmaapResponseStatusEvent"}}},"name":"onap.policies.native.apex.ac.element","version":"1.0.0","metadata":{"policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"01a3277e-772a-4c95-b79a-4ecc96f9bfb9","timestampMs":1708103037546,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 
policy-pap | [2024-02-16T17:03:57.696+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:58.562+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,678] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[{"name":"onap.policies.native.apex.ac.element","version":"1.0.0"}],"response":{"responseTo":"01a3277e-772a-4c95-b79a-4ecc96f9bfb9","responseStatus":"SUCCESS","responseMessage":"Apex engine started. Deployed policies are: onap.policies.native.apex.ac.element:1.0.0 "},"messageName":"PDP_STATUS","requestId":"8c6bbbbd-f3ca-4794-ae5b-08b628aefb3f","timestampMs":1708103038549,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:58.563+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[{"name":"onap.policies.native.apex.ac.element","version":"1.0.0"}],"response":{"responseTo":"01a3277e-772a-4c95-b79a-4ecc96f9bfb9","responseStatus":"SUCCESS","responseMessage":"Apex engine started. 
Deployed policies are: onap.policies.native.apex.ac.element:1.0.0 "},"messageName":"PDP_STATUS","requestId":"8c6bbbbd-f3ca-4794-ae5b-08b628aefb3f","timestampMs":1708103038549,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:58.564+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 01a3277e-772a-4c95-b79a-4ecc96f9bfb9 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:58.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:58.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping enqueue 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-db-migrator | > upgrade 0120-audit_sequence.sql 17:04:40 policy-pap | [2024-02-16T17:03:58.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping timer 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:58.565+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=01a3277e-772a-4c95-b79a-4ecc96f9bfb9, expireMs=1708103067651] 17:04:40 policy-pap | [2024-02-16T17:03:58.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping listener 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:04:40 policy-pap | [2024-02-16T17:03:58.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopped 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:58.586+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate successful 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:03:58.586+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f has no more requests 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:03:58.588+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 17:04:40 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.Apex","policy-type-version":"1.0.0","policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:03:58.704+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Error while fetching metadata with correlation id 4 : {policy-notification=LEADER_NOT_AVAILABLE} 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:04:17.861+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c4e106f0-4746-43d2-a87a-ad001ce96df0, expireMs=1708103057860] 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:04:18.053+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=042a6145-56dc-4711-9864-8edc62c6935b, expireMs=1708103058052] 17:04:40 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:21.443+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group defaultGroup 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:04:21.766+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.policies.native.apex.ac.element 1.0.0 17:04:40 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:04:40 policy-pap | [2024-02-16T17:04:21.766+00:00|INFO|SessionData|http-nio-6969-exec-4] add update apex-91910ceb-155f-47b8-a743-3152f517fc5f defaultGroup apex policies=0 17:04:40 policy-pap | [2024-02-16T17:04:21.766+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group defaultGroup 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:04:21.766+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group defaultGroup 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:04:21.795+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=apex, policy=onap.policies.native.apex.ac.element 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-16T17:04:21Z, user=policyadmin)] 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from 
NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:21.815+00:00|INFO|ServiceManager|http-nio-6969-exec-4] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting 17:04:40 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:21.815+00:00|INFO|ServiceManager|http-nio-6969-exec-4] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting listener 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,679] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,679] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:21.815+00:00|INFO|ServiceManager|http-nio-6969-exec-4] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting timer 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,692] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-acruntime-participant', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-acruntime-participant-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:21.815+00:00|INFO|TimerManager|http-nio-6969-exec-4] update timer registered Timer [name=2b39e7c2-14f0-4833-9db3-535a65414122, expireMs=1708103091815] 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | TRUNCATE TABLE sequence 17:04:40 policy-pap | [2024-02-16T17:04:21.816+00:00|INFO|ServiceManager|http-nio-6969-exec-4] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate starting enqueue 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:04:21.816+00:00|INFO|ServiceManager|http-nio-6969-exec-4] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate started 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:04:21.816+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 17:04:40 policy-db-migrator | 17:04:40 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.Apex","policy-type-version":"1.0.0","policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0","success-count":0,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 17:04:40 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 17:04:40 policy-pap | [2024-02-16T17:04:21.816+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","description":"The default group that registers 
all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.policies.native.apex.ac.element","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"2b39e7c2-14f0-4833-9db3-535a65414122","timestampMs":1708103061766,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 17:04:40 policy-pap | [2024-02-16T17:04:21.816+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=2b39e7c2-14f0-4833-9db3-535a65414122, expireMs=1708103091815] 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:04:21.828+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-db-migrator | 17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.policies.native.apex.ac.element","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"2b39e7c2-14f0-4833-9db3-535a65414122","timestampMs":1708103061766,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:04:21.828+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 17:04:40 policy-db-migrator | DROP TABLE pdpstatistics 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:04:21.852+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:04:40 kafka | [2024-02-16 17:02:51,693] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | {"source":"pap-dbd315b1-297c-4cfc-bbbb-4a85025cd3a3","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.policies.native.apex.ac.element","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"2b39e7c2-14f0-4833-9db3-535a65414122","timestampMs":1708103061766,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,699] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:21.852+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 17:04:40 policy-pap | [2024-02-16T17:04:22.708+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:04:40 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 17:04:40 kafka | [2024-02-16 17:02:51,703] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) 17:04:40 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"2b39e7c2-14f0-4833-9db3-535a65414122","responseStatus":"SUCCESS","responseMessage":"Pdp update successful. 
No policies are running."},"messageName":"PDP_STATUS","requestId":"701a1629-f808-462c-9b57-3f980496b3a2","timestampMs":1708103062697,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:22.708+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 2b39e7c2-14f0-4833-9db3-535a65414122 17:04:40 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:22.712+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:40 policy-db-migrator | -------------- 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"2b39e7c2-14f0-4833-9db3-535a65414122","responseStatus":"SUCCESS","responseMessage":"Pdp update successful. 
No policies are running."},"messageName":"PDP_STATUS","requestId":"701a1629-f808-462c-9b57-3f980496b3a2","timestampMs":1708103062697,"name":"apex-91910ceb-155f-47b8-a743-3152f517fc5f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:22.713+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping 17:04:40 policy-db-migrator | 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:22.713+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping enqueue 17:04:40 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-pap | [2024-02-16T17:04:22.713+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping timer 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-db-migrator | DROP TABLE statistics_sequence 17:04:40 policy-pap | [2024-02-16T17:04:22.713+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=2b39e7c2-14f0-4833-9db3-535a65414122, expireMs=1708103091815] 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | -------------- 17:04:40 policy-pap | [2024-02-16T17:04:22.713+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate stopping listener 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 17:04:40 policy-pap | [2024-02-16T17:04:22.713+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate 
stopped 17:04:40 policy-db-migrator | policyadmin: OK: upgrade (1300) 17:04:40 policy-pap | [2024-02-16T17:04:22.776+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f PdpUpdate successful 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | name version 17:04:40 policy-db-migrator | policyadmin 1300 17:04:40 policy-pap | [2024-02-16T17:04:22.776+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 17:04:40 policy-db-migrator | ID script operation from_version to_version tag success atTime 17:04:40 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.Apex","policy-type-version":"1.0.0","policy-id":"onap.policies.native.apex.ac.element","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}]} 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:34 17:04:40 policy-pap | [2024-02-16T17:04:22.776+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-91910ceb-155f-47b8-a743-3152f517fc5f has no more requests 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1602241702340800u 1 
2024-02-16 17:02:35 17:04:40 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:35 17:04:40 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:36 17:04:40 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:36 17:04:40 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:36 17:04:40 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:36 17:04:40 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:36 17:04:40 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:36 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:36 17:04:40 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:36 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 
1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:37 17:04:40 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql 
upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:38 17:04:40 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:39 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 
17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:40 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:41 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:41 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:41 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 93 
1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:41 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:41 17:04:40 kafka | [2024-02-16 17:02:51,705] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:04:40 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:41 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 17:04:40 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1602241702340800u 1 2024-02-16 17:02:41 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 17:04:40 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:41 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 17:04:40 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:41 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 17:04:40 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 17:04:40 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 17:04:40 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 17:04:40 policy-db-migrator | 102 
0150-pdpstatistics.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 17:04:40 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 17:04:40 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 17:04:40 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 17:04:40 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 17:04:40 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 17:04:40 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 17:04:40 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1602241702340900u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 17:04:40 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1602241702341000u 1 2024-02-16 17:02:42 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 17:04:40 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1602241702341000u 1 2024-02-16 17:02:43 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 17:04:40 policy-db-migrator | 112 0120-toscatrigger.sql 
upgrade 0900 1000 1602241702341000u 1 2024-02-16 17:02:43 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 17:04:40 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1602241702341000u 1 2024-02-16 17:02:43 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 17:04:40 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1602241702341000u 1 2024-02-16 17:02:43 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 17:04:40 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1602241702341000u 1 2024-02-16 17:02:43 17:04:40 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1602241702341000u 1 2024-02-16 17:02:43 17:04:40 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1602241702341000u 1 2024-02-16 17:02:43 17:04:40 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1602241702341000u 1 2024-02-16 17:02:43 17:04:40 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1602241702341100u 1 2024-02-16 17:02:43 17:04:40 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1602241702341200u 1 2024-02-16 17:02:43 17:04:40 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1602241702341200u 1 2024-02-16 17:02:43 17:04:40 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1602241702341200u 1 2024-02-16 17:02:43 17:04:40 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1602241702341200u 1 2024-02-16 17:02:44 17:04:40 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1602241702341300u 1 2024-02-16 17:02:44 17:04:40 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1602241702341300u 1 2024-02-16 17:02:44 17:04:40 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1602241702341300u 1 2024-02-16 17:02:44 17:04:40 policy-db-migrator | policyadmin: OK @ 1300 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-32 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,723] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 17:04:40 
kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,724] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,736] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, 
__consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 17:04:40 kafka | [2024-02-16 17:02:51,739] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,767] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:51,771] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:51,771] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:51,777] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:51,777] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,834] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:51,835] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:51,835] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:51,835] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:51,835] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,867] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:51,867] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:51,867] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:51,868] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:51,868] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,894] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:51,895] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:51,896] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:51,896] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:51,896] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:51,932] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:51,932] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:51,933] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:51,933] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:51,933] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,090] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,090] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,090] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,090] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,090] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,104] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,105] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,105] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,105] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,105] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,117] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,117] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,117] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,118] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,118] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,128] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,128] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,128] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,129] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,129] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,143] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,145] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,145] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,145] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,145] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,158] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,162] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,162] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,162] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,162] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,176] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,177] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,177] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,177] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,177] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,191] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,192] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,192] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,192] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,192] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,204] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,205] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,205] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,205] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,205] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,212] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,213] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,213] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,213] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,213] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,224] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,225] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,225] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,225] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,225] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,241] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,242] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,243] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,243] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,243] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,255] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,256] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,256] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,256] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,256] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,269] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,269] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,269] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,270] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,270] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,279] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,280] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,280] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,280] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,280] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,299] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,300] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,300] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,300] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,300] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,316] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,317] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,317] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,317] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,317] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,333] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,334] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,334] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,334] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,334] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,347] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,348] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,348] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,348] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,348] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,364] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,364] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,364] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,364] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,365] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,374] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,375] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,375] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,375] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,375] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,390] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,391] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,391] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,391] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,391] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,439] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,440] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,440] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,440] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,441] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,452] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,452] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,452] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,452] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,452] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,461] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,462] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,462] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,462] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,462] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,481] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,482] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,482] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,482] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,482] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,498] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,499] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,499] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,499] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,499] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,511] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,512] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,512] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,512] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,513] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,529] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,532] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,532] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,532] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,532] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,589] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,590] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,590] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,590] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,590] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,609] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,610] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,610] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,610] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,610] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,625] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,626] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,626] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,626] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,630] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,651] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,653] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,653] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,653] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,653] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,671] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,672] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,672] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,673] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,673] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,684] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,685] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,685] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,685] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,686] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,700] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,701] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,701] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,701] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,701] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,721] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,722] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,722] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,722] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,722] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,738] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,738] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,738] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,738] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,739] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,795] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,799] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,799] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,799] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,799] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,806] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,807] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,807] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,807] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,807] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,817] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,818] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,818] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,818] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,818] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,831] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,832] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,833] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,833] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,833] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,845] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,846] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,846] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,846] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,848] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,867] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,869] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,869] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,869] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,869] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,882] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:02:52,883] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:02:52,883] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,883] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:02:52,883] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(RctN_RRuS92Qhw_l4ItHJQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed 
LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,894] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,895] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,895] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,895] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,895] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,895] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,895] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,895] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,895] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,897] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,901] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,906] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,906] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,906] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 
17:02:52,907] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,907] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,908] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupCoordinator 
1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,909] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupCoordinator 1]: Elected as the group coordinator 
for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,910] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:52,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,915] INFO [Broker id=1] Finished LeaderAndIsr request in 1211ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,916] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=RctN_RRuS92Qhw_l4ItHJQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,920] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 17 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,922] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,923] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 17 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,923] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 18 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,925] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:02:52,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 18 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,927] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,927] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,927] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,927] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 54 milliseconds for epoch 0, of which 53 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 54 milliseconds for epoch 0, of which 54 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 54 milliseconds for epoch 0, of which 54 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 53 milliseconds for epoch 0, of which 53 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 54 milliseconds for epoch 0, of which 54 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 55 milliseconds for epoch 0, of which 54 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 56 milliseconds for epoch 0, of which 55 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 56 milliseconds for epoch 0, of which 56 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 56 milliseconds for epoch 0, of which 56 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,967] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 56 milliseconds for epoch 0, of which 55 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,968] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 57 milliseconds for epoch 0, of which 57 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,968] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 57 milliseconds for epoch 0, of which 57 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,969] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 58 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,969] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 58 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,969] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 58 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,969] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 57 milliseconds for epoch 0, of which 57 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,970] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 58 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,970] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 58 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,970] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 58 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,970] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 58 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,971] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 59 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,971] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 58 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,971] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 58 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,971] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 58 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 59 milliseconds for epoch 0, of which 58 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 59 milliseconds for epoch 0, of which 59 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 59 milliseconds for epoch 0, of which 59 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 61 milliseconds for epoch 0, of which 61 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 60 milliseconds for epoch 0, of which 60 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,975] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 61 milliseconds for epoch 0, of which 61 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:52,975] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 61 milliseconds for epoch 0, of which 61 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:04:40 kafka | [2024-02-16 17:02:53,063] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 6a2107c9-1f65-47c8-af5c-8c5cc7111397 in Empty state. Created a new member id consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2-89480e25-06c6-437b-9c32-99b445dd9bb0 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:53,081] INFO [GroupCoordinator 1]: Preparing to rebalance group 6a2107c9-1f65-47c8-af5c-8c5cc7111397 in state PreparingRebalance with old generation 0 (__consumer_offsets-35) (reason: Adding new member consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2-89480e25-06c6-437b-9c32-99b445dd9bb0 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:54,207] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 9e3f40d3-8822-4376-bb50-8f7cd5d48f19 in Empty state. Created a new member id consumer-9e3f40d3-8822-4376-bb50-8f7cd5d48f19-2-c4e12a42-a42c-4c65-ad1a-91e44e30f0e2 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:54,218] INFO [GroupCoordinator 1]: Preparing to rebalance group 9e3f40d3-8822-4376-bb50-8f7cd5d48f19 in state PreparingRebalance with old generation 0 (__consumer_offsets-48) (reason: Adding new member consumer-9e3f40d3-8822-4376-bb50-8f7cd5d48f19-2-c4e12a42-a42c-4c65-ad1a-91e44e30f0e2 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:54,385] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 32e809a3-a7c0-4e13-b7a3-aa811059e0bc in Empty state. Created a new member id consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2-69b726ae-4f77-4dda-9d1d-cb3f6a755d38 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:54,390] INFO [GroupCoordinator 1]: Preparing to rebalance group 32e809a3-a7c0-4e13-b7a3-aa811059e0bc in state PreparingRebalance with old generation 0 (__consumer_offsets-20) (reason: Adding new member consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2-69b726ae-4f77-4dda-9d1d-cb3f6a755d38 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:56,101] INFO [GroupCoordinator 1]: Stabilized group 6a2107c9-1f65-47c8-af5c-8c5cc7111397 generation 1 (__consumer_offsets-35) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:56,161] INFO [GroupCoordinator 1]: Assignment received from leader consumer-6a2107c9-1f65-47c8-af5c-8c5cc7111397-2-89480e25-06c6-437b-9c32-99b445dd9bb0 for group 6a2107c9-1f65-47c8-af5c-8c5cc7111397 for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:57,221] INFO [GroupCoordinator 1]: Stabilized group 9e3f40d3-8822-4376-bb50-8f7cd5d48f19 generation 1 (__consumer_offsets-48) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:57,241] INFO [GroupCoordinator 1]: Assignment received from leader consumer-9e3f40d3-8822-4376-bb50-8f7cd5d48f19-2-c4e12a42-a42c-4c65-ad1a-91e44e30f0e2 for group 9e3f40d3-8822-4376-bb50-8f7cd5d48f19 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:57,391] INFO [GroupCoordinator 1]: Stabilized group 32e809a3-a7c0-4e13-b7a3-aa811059e0bc generation 1 (__consumer_offsets-20) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:02:57,407] INFO [GroupCoordinator 1]: Assignment received from leader consumer-32e809a3-a7c0-4e13-b7a3-aa811059e0bc-2-69b726ae-4f77-4dda-9d1d-cb3f6a755d38 for group 32e809a3-a7c0-4e13-b7a3-aa811059e0bc for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:19,154] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 97317da4-3ba6-4109-8e73-20dc2312d257 in Empty state. Created a new member id consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2-8477fa0e-b745-40f3-b6b5-c74935b7f77f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:19,158] INFO [GroupCoordinator 1]: Preparing to rebalance group 97317da4-3ba6-4109-8e73-20dc2312d257 in state PreparingRebalance with old generation 0 (__consumer_offsets-20) (reason: Adding new member consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2-8477fa0e-b745-40f3-b6b5-c74935b7f77f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:22,159] INFO [GroupCoordinator 1]: Stabilized group 97317da4-3ba6-4109-8e73-20dc2312d257 generation 1 (__consumer_offsets-20) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:22,178] INFO [GroupCoordinator 1]: Assignment received from leader consumer-97317da4-3ba6-4109-8e73-20dc2312d257-2-8477fa0e-b745-40f3-b6b5-c74935b7f77f for group 97317da4-3ba6-4109-8e73-20dc2312d257 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:26,250] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 17:04:40 kafka | [2024-02-16 17:03:26,301] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 084a2e58-01c1-4612-9881-9e51d9ffa3ed in Empty state. Created a new member id consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3-af3a8725-a21a-4f60-8627-4eab7e3b5895 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:26,306] INFO [GroupCoordinator 1]: Preparing to rebalance group 084a2e58-01c1-4612-9881-9e51d9ffa3ed in state PreparingRebalance with old generation 0 (__consumer_offsets-21) (reason: Adding new member consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3-af3a8725-a21a-4f60-8627-4eab7e3b5895 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:26,366] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(eJk2oVHLQbGoRUMRmg6XYw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 17:04:40 kafka | [2024-02-16 17:03:26,366] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) 17:04:40 kafka | [2024-02-16 17:03:26,367] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,367] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,367] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,376] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-3aa6bf85-4fda-43f2-b6c8-6e2b6568a88c and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:26,377] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-3aa6bf85-4fda-43f2-b6c8-6e2b6568a88c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:26,378] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,378] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,378] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,379] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,379] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,379] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,380] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 5 from controller 1 epoch 1 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,381] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,381] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) 17:04:40 kafka | [2024-02-16 17:03:26,381] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 5 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,384] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:03:26,385] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:03:26,386] INFO [Partition policy-pdp-pap-0 
broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:03:26,386] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:03:26,386] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(eJk2oVHLQbGoRUMRmg6XYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,401] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 5 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,402] INFO [Broker id=1] Finished LeaderAndIsr request in 22ms correlationId 5 from controller 1 for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,404] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=eJk2oVHLQbGoRUMRmg6XYw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 5 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,405] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:26,407] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 6 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:27,792] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 02b3ddfc-6c0d-4750-8519-6e56d3cb3479 in Empty state. Created a new member id consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2-944c0d7e-1fbd-44f5-9ee5-8d53a0c0b80c and request the member to rejoin with this id. 
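Editor's note: the controller entries above show the topic policy-pdp-pap being created with a single partition and a single replica on broker 1, then moved from NonExistentPartition through NewPartition to OnlinePartition, with a LeaderAndIsr request and an UpdateMetadata round completing the transition. In this job the topic appears via the broker-side AdminZkClient (for example through auto-creation on first use); purely as an illustration, an explicit client-side creation that exercises the same controller path might look like the sketch below. It assumes a reachable broker at kafka:9092 and the kafka-python package, neither of which is part of the job itself.

    # Illustrative sketch only -- not part of the CI job.
    # Creating a one-partition, replication-factor-1 topic drives the controller
    # sequence logged above: NonExistentPartition -> NewPartition -> OnlinePartition,
    # followed by a LeaderAndIsr request to broker 1 and an UpdateMetadata round.
    from kafka.admin import KafkaAdminClient, NewTopic

    admin = KafkaAdminClient(bootstrap_servers="kafka:9092")   # broker address from the log
    admin.create_topics([
        # topic name taken from the log; fails with TopicAlreadyExistsError if it exists
        NewTopic(name="policy-pdp-pap", num_partitions=1, replication_factor=1),
    ])
    admin.close()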
(kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:27,797] INFO [GroupCoordinator 1]: Preparing to rebalance group 02b3ddfc-6c0d-4750-8519-6e56d3cb3479 in state PreparingRebalance with old generation 0 (__consumer_offsets-15) (reason: Adding new member consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2-944c0d7e-1fbd-44f5-9ee5-8d53a0c0b80c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:29,308] INFO [GroupCoordinator 1]: Stabilized group 084a2e58-01c1-4612-9881-9e51d9ffa3ed generation 1 (__consumer_offsets-21) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:29,322] INFO [GroupCoordinator 1]: Assignment received from leader consumer-084a2e58-01c1-4612-9881-9e51d9ffa3ed-3-af3a8725-a21a-4f60-8627-4eab7e3b5895 for group 084a2e58-01c1-4612-9881-9e51d9ffa3ed for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:29,378] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:29,382] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-3aa6bf85-4fda-43f2-b6c8-6e2b6568a88c for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:30,798] INFO [GroupCoordinator 1]: Stabilized group 02b3ddfc-6c0d-4750-8519-6e56d3cb3479 generation 1 (__consumer_offsets-15) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:30,809] INFO [GroupCoordinator 1]: Assignment received from leader consumer-02b3ddfc-6c0d-4750-8519-6e56d3cb3479-2-944c0d7e-1fbd-44f5-9ee5-8d53a0c0b80c for group 02b3ddfc-6c0d-4750-8519-6e56d3cb3479 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:41,643] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 0b0f93e1-9727-45a5-b97d-714a24b64a62 in Empty state. Created a new member id consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2-95b41a35-bc4e-4e57-9f7d-94d8dfef3b72 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:41,646] INFO [GroupCoordinator 1]: Preparing to rebalance group 0b0f93e1-9727-45a5-b97d-714a24b64a62 in state PreparingRebalance with old generation 0 (__consumer_offsets-28) (reason: Adding new member consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2-95b41a35-bc4e-4e57-9f7d-94d8dfef3b72 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:44,647] INFO [GroupCoordinator 1]: Stabilized group 0b0f93e1-9727-45a5-b97d-714a24b64a62 generation 1 (__consumer_offsets-28) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:44,667] INFO [GroupCoordinator 1]: Assignment received from leader consumer-0b0f93e1-9727-45a5-b97d-714a24b64a62-2-95b41a35-bc4e-4e57-9f7d-94d8dfef3b72 for group 0b0f93e1-9727-45a5-b97d-714a24b64a62 for generation 1. The group has 1 members, 0 of which are static. 
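Editor's note: the GroupCoordinator entries above repeat the same dynamic-membership cycle for every consumer group in the test: a first join with an unknown member id is answered with MemberIdRequiredException and a broker-generated member id, the consumer rejoins, the group goes through PreparingRebalance, stabilizes at generation 1, and the leader's assignment is distributed. A minimal client-side sketch of that cycle follows, assuming kafka-python and a broker at kafka:9092; the real participants here are the Java policy/clamp components, and the topic and group names are simply copied from the log.

    # Illustrative sketch only -- not part of the CI job.
    # Subscribing with a group id triggers the coordinator lines seen above:
    # join with unknown member id -> MemberIdRequiredException -> rejoin ->
    # PreparingRebalance -> Stabilized -> assignment from the group leader.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "policy-pdp-pap",                 # topic name taken from the log
        bootstrap_servers="kafka:9092",   # broker address as it appears in the log
        group_id="policy-pap",            # consumer group name taken from the log
        auto_offset_reset="earliest",
        consumer_timeout_ms=10000,        # stop iterating after 10s of no records
    )

    for record in consumer:
        print(record.topic, record.partition, record.offset, len(record.value))

    # close() sends LeaveGroup, which the coordinator logs later as
    # "Removing member ... on LeaveGroup; client reason: the consumer is being closed".
    consumer.close()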
(kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:58,582] INFO Creating topic ac_element_msg with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 17:04:40 kafka | [2024-02-16 17:03:58,667] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 17:04:40 kafka | [2024-02-16 17:03:58,700] INFO [Controller id=1] New topics: [Set(ac_element_msg)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(ac_element_msg,Some(hRWtD2_FRnarfQOxyKTebA),Map(ac_element_msg-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 17:04:40 kafka | [2024-02-16 17:03:58,700] INFO [Controller id=1] New partition creation callback for ac_element_msg-0 (kafka.controller.KafkaController) 17:04:40 kafka | [2024-02-16 17:03:58,701] INFO [Controller id=1 epoch=1] Changed partition ac_element_msg-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,701] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,701] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition ac_element_msg-0 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,701] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,705] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group clamp-grp in Empty state. Created a new member id consumer-clamp-grp-3-c95ab340-d38a-4d04-a341-ef58067afa4c and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:58,707] INFO [GroupCoordinator 1]: Preparing to rebalance group clamp-grp in state PreparingRebalance with old generation 0 (__consumer_offsets-23) (reason: Adding new member consumer-clamp-grp-3-c95ab340-d38a-4d04-a341-ef58067afa4c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:03:58,712] INFO [Controller id=1 epoch=1] Changed partition ac_element_msg-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,713] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='ac_element_msg', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition ac_element_msg-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,713] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,714] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,715] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition ac_element_msg-0 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,715] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,716] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 7 from controller 1 for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='ac_element_msg', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 7 from controller 1 epoch 1 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,717] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 7 from controller 1 epoch 1 starting the become-leader transition for partition ac_element_msg-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,717] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(ac_element_msg-0) (kafka.server.ReplicaFetcherManager) 17:04:40 kafka | [2024-02-16 17:03:58,717] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 7 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,724] INFO [Controller id=1] New topics: [HashSet(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(f0j-NTyLTd2ynli9qey0Hg),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 17:04:40 kafka | [2024-02-16 17:03:58,725] INFO [Controller id=1] New 
partition creation callback for policy-notification-0 (kafka.controller.KafkaController) 17:04:40 kafka | [2024-02-16 17:03:58,725] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,725] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,725] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,725] INFO [LogLoader partition=ac_element_msg-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:03:58,726] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,726] INFO Created log for partition ac_element_msg-0 in /var/lib/kafka/data/ac_element_msg-0 with properties {} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:03:58,727] INFO [Partition ac_element_msg-0 broker=1] No checkpointed highwatermark is found for partition ac_element_msg-0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:03:58,727] INFO [Partition ac_element_msg-0 broker=1] Log loaded for partition ac_element_msg-0 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:03:58,727] INFO [Broker id=1] Leader ac_element_msg-0 with topic id Some(hRWtD2_FRnarfQOxyKTebA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,740] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 7 from controller 1 epoch 1 for the become-leader transition for partition ac_element_msg-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,741] INFO [Broker id=1] Finished LeaderAndIsr request in 25ms correlationId 7 from controller 1 for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,741] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=hRWtD2_FRnarfQOxyKTebA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 7 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,744] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='ac_element_msg', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition ac_element_msg-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 8 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,744] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 8 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,744] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 8 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,745] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,745] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,745] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,746] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,746] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,746] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 9 from controller 1 for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,747] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 9 from controller 1 epoch 1 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,748] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 9 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,748] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) 17:04:40 kafka | [2024-02-16 17:03:58,748] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 9 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,752] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:40 kafka | [2024-02-16 17:03:58,753] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) 17:04:40 kafka | [2024-02-16 17:03:58,754] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:03:58,754] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) 17:04:40 kafka | [2024-02-16 17:03:58,754] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(f0j-NTyLTd2ynli9qey0Hg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,778] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 9 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,779] INFO [Broker id=1] Finished LeaderAndIsr request in 33ms correlationId 9 from controller 1 for 1 partitions (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,780] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=f0j-NTyLTd2ynli9qey0Hg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 9 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,781] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 10 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,781] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 10 (state.change.logger) 17:04:40 kafka | [2024-02-16 17:03:58,781] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 10 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:40 kafka | [2024-02-16 17:04:01,709] INFO [GroupCoordinator 1]: Stabilized group clamp-grp generation 1 (__consumer_offsets-23) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:04:01,715] INFO [GroupCoordinator 1]: Assignment received from leader consumer-clamp-grp-3-c95ab340-d38a-4d04-a341-ef58067afa4c for group clamp-grp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:04:21,945] INFO [GroupCoordinator 1]: Preparing to rebalance group clamp-grp in state PreparingRebalance with old generation 1 (__consumer_offsets-23) (reason: Removing member consumer-clamp-grp-3-c95ab340-d38a-4d04-a341-ef58067afa4c on LeaveGroup; client reason: the consumer is being closed) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:04:21,945] INFO [GroupCoordinator 1]: Group clamp-grp with generation 2 is now empty (__consumer_offsets-23) (kafka.coordinator.group.GroupCoordinator) 17:04:40 kafka | [2024-02-16 17:04:21,948] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=consumer-clamp-grp-3-c95ab340-d38a-4d04-a341-ef58067afa4c, groupInstanceId=None, clientId=consumer-clamp-grp-3, clientHost=/172.17.0.13, sessionTimeoutMs=30000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group clamp-grp through explicit `LeaveGroup`; client reason: the consumer is being closed (kafka.coordinator.group.GroupCoordinator) 17:04:40 ++ echo 'Tearing down containers...' 17:04:40 Tearing down containers... 17:04:40 ++ docker-compose down -v --remove-orphans 17:04:40 Stopping policy-clamp-runtime-acm ... 17:04:40 Stopping policy-apex-pdp ... 17:04:40 Stopping policy-clamp-ac-pf-ppnt ... 
17:04:40 Stopping policy-pap ... 17:04:40 Stopping policy-api ... 17:04:40 Stopping policy-clamp-ac-k8s-ppnt ... 17:04:40 Stopping policy-clamp-ac-sim-ppnt ... 17:04:40 Stopping policy-clamp-ac-http-ppnt ... 17:04:40 Stopping kafka ... 17:04:40 Stopping compose_zookeeper_1 ... 17:04:40 Stopping simulator ... 17:04:40 Stopping mariadb ... 17:04:51 Stopping policy-clamp-runtime-acm ... done 17:05:01 Stopping policy-clamp-ac-sim-ppnt ... done 17:05:01 Stopping policy-clamp-ac-pf-ppnt ... done 17:05:01 Stopping policy-clamp-ac-http-ppnt ... done 17:05:02 Stopping policy-clamp-ac-k8s-ppnt ... done 17:05:02 Stopping policy-apex-pdp ... done 17:05:13 Stopping policy-pap ... done 17:05:13 Stopping simulator ... done 17:05:15 Stopping mariadb ... done 17:05:16 Stopping kafka ... done 17:05:17 Stopping compose_zookeeper_1 ... done 17:05:23 Stopping policy-api ... done 17:05:23 Removing policy-clamp-runtime-acm ... 17:05:23 Removing policy-apex-pdp ... 17:05:23 Removing policy-clamp-ac-pf-ppnt ... 17:05:23 Removing policy-pap ... 17:05:23 Removing policy-api ... 17:05:23 Removing policy-clamp-ac-k8s-ppnt ... 17:05:23 Removing policy-clamp-ac-sim-ppnt ... 17:05:23 Removing policy-clamp-ac-http-ppnt ... 17:05:23 Removing policy-db-migrator ... 17:05:23 Removing kafka ... 17:05:23 Removing compose_zookeeper_1 ... 17:05:23 Removing simulator ... 17:05:23 Removing mariadb ... 17:05:23 Removing policy-api ... done 17:05:23 Removing policy-clamp-ac-pf-ppnt ... done 17:05:23 Removing policy-clamp-ac-http-ppnt ... done 17:05:23 Removing policy-db-migrator ... done 17:05:23 Removing mariadb ... done 17:05:23 Removing policy-clamp-ac-k8s-ppnt ... done 17:05:23 Removing policy-clamp-ac-sim-ppnt ... done 17:05:23 Removing policy-clamp-runtime-acm ... done 17:05:23 Removing policy-apex-pdp ... done 17:05:23 Removing simulator ... done 17:05:23 Removing kafka ... done 17:05:23 Removing compose_zookeeper_1 ... done 17:05:23 Removing policy-pap ... done 17:05:23 Removing network compose_default 17:05:24 ++ cd /w/workspace/policy-clamp-master-project-csit-clamp 17:05:24 + load_set 17:05:24 + _setopts=hxB 17:05:24 ++ tr : ' ' 17:05:24 ++ echo braceexpand:hashall:interactive-comments:xtrace 17:05:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 17:05:24 + set +o braceexpand 17:05:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 17:05:24 + set +o hashall 17:05:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 17:05:24 + set +o interactive-comments 17:05:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 17:05:24 + set +o xtrace 17:05:24 ++ echo hxB 17:05:24 ++ sed 's/./& /g' 17:05:24 + for i in $(echo "$_setopts" | sed 's/./& /g') 17:05:24 + set +h 17:05:24 + for i in $(echo "$_setopts" | sed 's/./& /g') 17:05:24 + set +x 17:05:24 + [[ -n /tmp/tmp.0Q219FRrYS ]] 17:05:24 + rsync -av /tmp/tmp.0Q219FRrYS/ /w/workspace/policy-clamp-master-project-csit-clamp/csit/archives/clamp 17:05:24 sending incremental file list 17:05:24 ./ 17:05:24 log.html 17:05:24 output.xml 17:05:24 report.html 17:05:24 testplan.txt 17:05:24 17:05:24 sent 789,119 bytes received 95 bytes 1,578,428.00 bytes/sec 17:05:24 total size is 788,605 speedup is 1.00 17:05:24 + rm -rf /w/workspace/policy-clamp-master-project-csit-clamp/models 17:05:24 + exit 0 17:05:24 $ ssh-agent -k 17:05:24 unset SSH_AUTH_SOCK; 17:05:24 unset SSH_AGENT_PID; 17:05:24 echo Agent pid 2143 killed; 17:05:24 [ssh-agent] Stopped. 17:05:24 Robot results publisher started... 17:05:24 INFO: Checking test criticality is deprecated and will be dropped in a future release! 
17:05:24 -Parsing output xml: 17:05:24 Done! 17:05:24 WARNING! Could not find file: **/log.html 17:05:24 WARNING! Could not find file: **/report.html 17:05:24 -Copying log files to build dir: 17:05:25 Done! 17:05:25 -Assigning results to build: 17:05:25 Done! 17:05:25 -Checking thresholds: 17:05:25 Done! 17:05:25 Done publishing Robot results. 17:05:25 [PostBuildScript] - [INFO] Executing post build scripts. 17:05:25 [policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins9013672567522195430.sh 17:05:25 ---> sysstat.sh 17:05:25 [policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins12064605511040838868.sh 17:05:25 ---> package-listing.sh 17:05:25 ++ facter osfamily 17:05:25 ++ tr '[:upper:]' '[:lower:]' 17:05:25 + OS_FAMILY=debian 17:05:25 + workspace=/w/workspace/policy-clamp-master-project-csit-clamp 17:05:25 + START_PACKAGES=/tmp/packages_start.txt 17:05:25 + END_PACKAGES=/tmp/packages_end.txt 17:05:25 + DIFF_PACKAGES=/tmp/packages_diff.txt 17:05:25 + PACKAGES=/tmp/packages_start.txt 17:05:25 + '[' /w/workspace/policy-clamp-master-project-csit-clamp ']' 17:05:25 + PACKAGES=/tmp/packages_end.txt 17:05:25 + case "${OS_FAMILY}" in 17:05:25 + dpkg -l 17:05:25 + grep '^ii' 17:05:25 + '[' -f /tmp/packages_start.txt ']' 17:05:25 + '[' -f /tmp/packages_end.txt ']' 17:05:25 + diff /tmp/packages_start.txt /tmp/packages_end.txt 17:05:25 + '[' /w/workspace/policy-clamp-master-project-csit-clamp ']' 17:05:25 + mkdir -p /w/workspace/policy-clamp-master-project-csit-clamp/archives/ 17:05:25 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-clamp-master-project-csit-clamp/archives/ 17:05:25 [policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins4130783798441721827.sh 17:05:25 ---> capture-instance-metadata.sh 17:05:25 Setup pyenv: 17:05:26 system 17:05:26 3.8.13 17:05:26 3.9.13 17:05:26 * 3.10.6 (set by /w/workspace/policy-clamp-master-project-csit-clamp/.python-version) 17:05:26 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Qefd from file:/tmp/.os_lf_venv 17:05:27 lf-activate-venv(): INFO: Installing: lftools 17:05:38 lf-activate-venv(): INFO: Adding /tmp/venv-Qefd/bin to PATH 17:05:38 INFO: Running in OpenStack, capturing instance metadata 17:05:38 [policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins7581272149896762731.sh 17:05:38 provisioning config files... 17:05:38 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-clamp-master-project-csit-clamp@tmp/config6397541779241743392tmp 17:05:38 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 17:05:38 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 17:05:38 [EnvInject] - Injecting environment variables from a build step. 17:05:38 [EnvInject] - Injecting as environment variables the properties content 17:05:38 SERVER_ID=logs 17:05:38 17:05:38 [EnvInject] - Variables injected successfully. 
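Editor's note: the package-listing.sh trace above takes the installed-package list at the end of the job (dpkg -l filtered to the ii lines), diffs it against the snapshot taken at the start, and copies packages_start.txt, packages_end.txt and packages_diff.txt into the workspace archives. A minimal sketch of the same idea in Python, assuming a dpkg-based host; the file names mirror the ones in the trace.

    # Illustrative sketch only -- mirrors the idea of package-listing.sh above.
    import difflib
    import subprocess
    from pathlib import Path

    def installed_packages() -> list[str]:
        # Equivalent of `dpkg -l | grep '^ii'`
        out = subprocess.run(["dpkg", "-l"], capture_output=True, text=True, check=True).stdout
        return [line for line in out.splitlines() if line.startswith("ii")]

    def snapshot(path: str) -> None:
        Path(path).write_text("\n".join(installed_packages()) + "\n")

    def diff(start: str, end: str, dest: str) -> None:
        a = Path(start).read_text().splitlines()
        b = Path(end).read_text().splitlines()
        Path(dest).write_text("\n".join(difflib.unified_diff(a, b, start, end, lineterm="")) + "\n")

    # snapshot("/tmp/packages_start.txt")   # at job start
    # snapshot("/tmp/packages_end.txt")     # at job end
    # diff("/tmp/packages_start.txt", "/tmp/packages_end.txt", "/tmp/packages_diff.txt")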
17:05:38 [policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins4351591394465952203.sh 17:05:38 ---> create-netrc.sh 17:05:38 [policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins11898755941529134809.sh 17:05:38 ---> python-tools-install.sh 17:05:38 Setup pyenv: 17:05:38 system 17:05:38 3.8.13 17:05:38 3.9.13 17:05:38 * 3.10.6 (set by /w/workspace/policy-clamp-master-project-csit-clamp/.python-version) 17:05:39 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Qefd from file:/tmp/.os_lf_venv 17:05:40 lf-activate-venv(): INFO: Installing: lftools 17:05:48 lf-activate-venv(): INFO: Adding /tmp/venv-Qefd/bin to PATH 17:05:48 [policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins4153233462517979137.sh 17:05:48 ---> sudo-logs.sh 17:05:48 Archiving 'sudo' log.. 17:05:48 [policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins12146074620130454664.sh 17:05:48 ---> job-cost.sh 17:05:48 Setup pyenv: 17:05:48 system 17:05:49 3.8.13 17:05:49 3.9.13 17:05:49 * 3.10.6 (set by /w/workspace/policy-clamp-master-project-csit-clamp/.python-version) 17:05:49 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Qefd from file:/tmp/.os_lf_venv 17:05:50 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 17:05:58 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 17:05:58 lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible. 17:05:58 lf-activate-venv(): INFO: Adding /tmp/venv-Qefd/bin to PATH 17:05:58 INFO: No Stack... 17:05:58 INFO: Retrieving Pricing Info for: v3-standard-8 17:05:59 INFO: Archiving Costs 17:05:59 [policy-clamp-master-project-csit-clamp] $ /bin/bash -l /tmp/jenkins16098859291460854767.sh 17:05:59 ---> logs-deploy.sh 17:05:59 Setup pyenv: 17:05:59 system 17:05:59 3.8.13 17:05:59 3.9.13 17:05:59 * 3.10.6 (set by /w/workspace/policy-clamp-master-project-csit-clamp/.python-version) 17:05:59 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Qefd from file:/tmp/.os_lf_venv 17:06:01 lf-activate-venv(): INFO: Installing: lftools 17:06:10 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 17:06:10 python-openstackclient 6.5.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible. 17:06:11 lf-activate-venv(): INFO: Adding /tmp/venv-Qefd/bin to PATH 17:06:11 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-clamp-master-project-csit-clamp/1127 17:06:11 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 17:06:12 Archives upload complete. 
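Editor's note: the logs-deploy.sh step above archives workspace files matching the pattern **/target/surefire-reports/*-output.txt before uploading them to Nexus. The sketch below only illustrates the collection step under stated assumptions: the workspace path is the one from the log, the archive file name is made up for the example, and the upload to Nexus (done by lftools in the job) is not shown.

    # Illustrative sketch only -- gathers files matching the archive pattern from the log.
    import zipfile
    from pathlib import Path

    workspace = Path("/w/workspace/policy-clamp-master-project-csit-clamp")  # path from the log
    pattern = "**/target/surefire-reports/*-output.txt"                      # pattern from the log

    with zipfile.ZipFile("archives.zip", "w", zipfile.ZIP_DEFLATED) as zf:   # name is illustrative
        for path in workspace.glob(pattern):
            # store paths relative to the workspace so the archive mirrors the tree layout
            zf.write(path, str(path.relative_to(workspace)))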
17:06:12 INFO: archiving logs to Nexus 17:06:13 ---> uname -a: 17:06:13 Linux prd-ubuntu1804-docker-8c-8g-5887 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 17:06:13 17:06:13 17:06:13 ---> lscpu: 17:06:13 Architecture: x86_64 17:06:13 CPU op-mode(s): 32-bit, 64-bit 17:06:13 Byte Order: Little Endian 17:06:13 CPU(s): 8 17:06:13 On-line CPU(s) list: 0-7 17:06:13 Thread(s) per core: 1 17:06:13 Core(s) per socket: 1 17:06:13 Socket(s): 8 17:06:13 NUMA node(s): 1 17:06:13 Vendor ID: AuthenticAMD 17:06:13 CPU family: 23 17:06:13 Model: 49 17:06:13 Model name: AMD EPYC-Rome Processor 17:06:13 Stepping: 0 17:06:13 CPU MHz: 2799.998 17:06:13 BogoMIPS: 5599.99 17:06:13 Virtualization: AMD-V 17:06:13 Hypervisor vendor: KVM 17:06:13 Virtualization type: full 17:06:13 L1d cache: 32K 17:06:13 L1i cache: 32K 17:06:13 L2 cache: 512K 17:06:13 L3 cache: 16384K 17:06:13 NUMA node0 CPU(s): 0-7 17:06:13 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 17:06:13 17:06:13 17:06:13 ---> nproc: 17:06:13 8 17:06:13 17:06:13 17:06:13 ---> df -h: 17:06:13 Filesystem Size Used Avail Use% Mounted on 17:06:13 udev 16G 0 16G 0% /dev 17:06:13 tmpfs 3.2G 708K 3.2G 1% /run 17:06:13 /dev/vda1 155G 14G 142G 9% / 17:06:13 tmpfs 16G 0 16G 0% /dev/shm 17:06:13 tmpfs 5.0M 0 5.0M 0% /run/lock 17:06:13 tmpfs 16G 0 16G 0% /sys/fs/cgroup 17:06:13 /dev/vda15 105M 4.4M 100M 5% /boot/efi 17:06:13 tmpfs 3.2G 0 3.2G 0% /run/user/1001 17:06:13 17:06:13 17:06:13 ---> free -m: 17:06:13 total used free shared buff/cache available 17:06:13 Mem: 32167 848 25337 0 5981 30862 17:06:13 Swap: 1023 0 1023 17:06:13 17:06:13 17:06:13 ---> ip addr: 17:06:13 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 17:06:13 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 17:06:13 inet 127.0.0.1/8 scope host lo 17:06:13 valid_lft forever preferred_lft forever 17:06:13 inet6 ::1/128 scope host 17:06:13 valid_lft forever preferred_lft forever 17:06:13 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 17:06:13 link/ether fa:16:3e:72:67:35 brd ff:ff:ff:ff:ff:ff 17:06:13 inet 10.30.106.16/23 brd 10.30.107.255 scope global dynamic ens3 17:06:13 valid_lft 85833sec preferred_lft 85833sec 17:06:13 inet6 fe80::f816:3eff:fe72:6735/64 scope link 17:06:13 valid_lft forever preferred_lft forever 17:06:13 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 17:06:13 link/ether 02:42:31:35:48:f4 brd ff:ff:ff:ff:ff:ff 17:06:13 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 17:06:13 valid_lft forever preferred_lft forever 17:06:13 17:06:13 17:06:13 ---> sar -b -r -n DEV: 17:06:13 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-5887) 02/16/24 _x86_64_ (8 CPU) 17:06:13 17:06:13 16:56:48 LINUX RESTART (8 CPU) 17:06:13 17:06:13 16:57:01 tps rtps wtps bread/s bwrtn/s 17:06:13 16:58:02 138.98 74.14 64.84 5303.78 57827.70 17:06:13 16:59:01 107.25 13.49 93.76 989.80 32026.44 17:06:13 17:00:01 103.85 23.03 80.82 
2762.15 30210.73 17:06:13 17:01:01 138.89 0.15 138.74 20.80 69983.94 17:06:13 17:02:01 52.63 0.10 52.53 6.80 51818.33 17:06:13 17:03:01 258.64 13.60 245.04 774.34 46663.94 17:06:13 17:04:01 29.51 0.42 29.10 33.99 23687.45 17:06:13 17:05:01 27.03 0.02 27.01 2.40 23362.86 17:06:13 17:06:01 83.25 1.93 81.32 111.98 7502.92 17:06:13 Average: 104.44 14.10 90.35 1112.02 38131.89 17:06:13 17:06:13 16:57:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 17:06:13 16:58:02 30319460 31679628 2619760 7.95 51460 1629976 1458660 4.29 861504 1486424 57864 17:06:13 16:59:01 30026456 31692028 2912764 8.84 75432 1895000 1461720 4.30 884100 1721192 96088 17:06:13 17:00:01 28710748 31662108 4228472 12.84 98956 3124524 1583888 4.66 978440 2874088 1064760 17:06:13 17:01:01 27650188 31676052 5289032 16.06 120376 4118456 1415244 4.16 982480 3864132 435836 17:06:13 17:02:01 26046344 31672176 6892876 20.93 128032 5654532 1471692 4.33 995256 5398960 705636 17:06:13 17:03:01 23725152 29397400 9214068 27.97 137396 5674620 9505640 27.97 3459460 5168584 232 17:06:13 17:04:01 21671420 27462068 11267800 34.21 138380 5787404 12065536 35.50 5496360 5166240 1484 17:06:13 17:05:01 23661640 29482376 9277580 28.17 138972 5816128 8566404 25.20 3521136 5162632 652 17:06:13 17:06:01 25980020 31636748 6959200 21.13 142832 5664240 1549472 4.56 1401944 5033660 31136 17:06:13 Average: 26421270 30706732 6517950 19.79 114648 4373876 4342028 12.78 2064520 3986212 265965 17:06:13 17:06:13 16:57:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 17:06:13 16:58:02 ens3 402.58 247.21 944.10 57.91 0.00 0.00 0.00 0.00 17:06:13 16:58:02 lo 1.80 1.80 0.18 0.18 0.00 0.00 0.00 0.00 17:06:13 16:58:02 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:06:13 16:59:01 ens3 48.77 32.16 915.71 5.56 0.00 0.00 0.00 0.00 17:06:13 16:59:01 lo 1.15 1.15 0.12 0.12 0.00 0.00 0.00 0.00 17:06:13 16:59:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:06:13 17:00:01 br-844c08b3ab35 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:06:13 17:00:01 ens3 543.49 279.82 10080.68 24.45 0.00 0.00 0.00 0.00 17:06:13 17:00:01 lo 6.20 6.20 0.64 0.64 0.00 0.00 0.00 0.00 17:06:13 17:00:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:06:13 17:01:01 br-844c08b3ab35 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:06:13 17:01:01 ens3 248.84 128.61 7131.03 9.88 0.00 0.00 0.00 0.00 17:06:13 17:01:01 lo 4.00 4.00 0.38 0.38 0.00 0.00 0.00 0.00 17:06:13 17:01:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:06:13 17:02:01 br-844c08b3ab35 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:06:13 17:02:01 ens3 524.94 276.04 19506.13 19.22 0.00 0.00 0.00 0.00 17:06:13 17:02:01 lo 4.73 4.73 0.43 0.43 0.00 0.00 0.00 0.00 17:06:13 17:02:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:06:13 17:03:01 veth201014b 9.33 9.62 1.33 1.32 0.00 0.00 0.00 0.00 17:06:13 17:03:01 br-844c08b3ab35 0.42 0.20 0.01 0.02 0.00 0.00 0.00 0.00 17:06:13 17:03:01 ens3 4.52 2.55 1.08 0.60 0.00 0.00 0.00 0.00 17:06:13 17:03:01 veth64fa0f8 1.02 1.40 0.12 0.14 0.00 0.00 0.00 0.00 17:06:13 17:04:01 veth201014b 35.01 27.25 7.40 4.94 0.00 0.00 0.00 0.00 17:06:13 17:04:01 br-844c08b3ab35 1.17 1.33 1.61 0.98 0.00 0.00 0.00 0.00 17:06:13 17:04:01 ens3 8.30 5.25 2.10 2.12 0.00 0.00 0.00 0.00 17:06:13 17:04:01 veth64fa0f8 3.80 5.45 0.60 1.08 0.00 0.00 0.00 0.00 17:06:13 17:05:01 veth201014b 42.99 31.36 3.99 4.74 0.00 0.00 0.00 0.00 17:06:13 17:05:01 br-844c08b3ab35 1.47 2.00 2.73 0.19 0.00 0.00 0.00 0.00 17:06:13 17:05:01 ens3 17.51 15.25 7.05 24.54 0.00 
0.00 0.00 0.00 17:06:13 17:05:01 veth64fa0f8 4.00 5.63 0.63 0.56 0.00 0.00 0.00 0.00 17:06:13 17:06:01 ens3 51.31 41.79 115.60 17.14 0.00 0.00 0.00 0.00 17:06:13 17:06:01 lo 31.29 31.29 7.84 7.84 0.00 0.00 0.00 0.00 17:06:13 17:06:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:06:13 Average: ens3 205.89 114.46 4307.05 17.96 0.00 0.00 0.00 0.00 17:06:13 Average: lo 3.15 3.15 0.85 0.85 0.00 0.00 0.00 0.00 17:06:13 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:06:13 17:06:13 17:06:13 ---> sar -P ALL: 17:06:13 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-5887) 02/16/24 _x86_64_ (8 CPU) 17:06:13 17:06:13 16:56:48 LINUX RESTART (8 CPU) 17:06:13 17:06:13 16:57:01 CPU %user %nice %system %iowait %steal %idle 17:06:13 16:58:02 all 7.64 0.00 1.11 7.91 0.04 83.30 17:06:13 16:58:02 0 4.62 0.00 1.10 11.78 0.03 82.46 17:06:13 16:58:02 1 7.35 0.00 0.95 0.92 0.03 90.75 17:06:13 16:58:02 2 3.60 0.00 0.63 0.83 0.03 94.90 17:06:13 16:58:02 3 4.63 0.00 0.59 0.65 0.05 94.08 17:06:13 16:58:02 4 6.03 0.00 0.52 0.30 0.02 93.13 17:06:13 16:58:02 5 4.34 0.00 2.18 33.85 0.03 59.59 17:06:13 16:58:02 6 14.72 0.00 1.24 13.55 0.05 70.45 17:06:13 16:58:02 7 15.84 0.00 1.62 1.52 0.05 80.97 17:06:13 16:59:01 all 8.89 0.00 0.78 5.02 0.04 85.28 17:06:13 16:59:01 0 10.38 0.00 0.92 0.10 0.03 88.57 17:06:13 16:59:01 1 22.47 0.00 1.68 3.59 0.08 72.18 17:06:13 16:59:01 2 1.17 0.00 0.37 5.33 0.00 93.13 17:06:13 16:59:01 3 9.25 0.00 1.04 8.52 0.03 81.16 17:06:13 16:59:01 4 6.66 0.00 0.56 0.12 0.02 92.64 17:06:13 16:59:01 5 0.10 0.00 0.24 19.23 0.02 80.41 17:06:13 16:59:01 6 1.85 0.00 0.32 0.41 0.03 97.38 17:06:13 16:59:01 7 19.27 0.00 1.12 2.86 0.05 76.70 17:06:13 17:00:01 all 10.14 0.00 1.93 4.53 0.05 83.35 17:06:13 17:00:01 0 9.48 0.00 2.15 0.37 0.07 87.93 17:06:13 17:00:01 1 5.31 0.00 2.23 6.70 0.05 85.70 17:06:13 17:00:01 2 10.54 0.00 2.01 0.13 0.05 87.26 17:06:13 17:00:01 3 3.29 0.00 1.53 18.67 0.05 76.46 17:06:13 17:00:01 4 4.14 0.00 1.19 1.19 0.03 93.45 17:06:13 17:00:01 5 22.62 0.00 2.00 2.40 0.08 72.90 17:06:13 17:00:01 6 6.54 0.00 1.61 0.62 0.05 91.17 17:06:13 17:00:01 7 19.22 0.00 2.74 6.10 0.07 71.88 17:06:13 17:01:01 all 5.41 0.00 2.60 11.97 0.04 79.98 17:06:13 17:01:01 0 6.27 0.00 2.58 0.00 0.03 91.12 17:06:13 17:01:01 1 5.88 0.00 3.09 24.13 0.07 66.82 17:06:13 17:01:01 2 5.31 0.00 2.90 14.88 0.05 76.86 17:06:13 17:01:01 3 5.47 0.00 2.37 1.53 0.03 90.60 17:06:13 17:01:01 4 6.12 0.00 3.01 26.44 0.05 64.38 17:06:13 17:01:01 5 5.52 0.00 1.99 7.05 0.03 85.40 17:06:13 17:01:01 6 4.10 0.00 3.09 0.62 0.03 92.16 17:06:13 17:01:01 7 4.58 0.00 1.80 21.13 0.03 72.46 17:06:13 17:02:01 all 6.10 0.00 2.63 18.02 0.04 73.20 17:06:13 17:02:01 0 4.80 0.00 1.55 46.53 0.03 47.09 17:06:13 17:02:01 1 8.17 0.00 2.58 16.84 0.03 72.37 17:06:13 17:02:01 2 7.31 0.00 2.95 0.64 0.03 89.07 17:06:13 17:02:01 3 6.23 0.00 2.37 33.50 0.03 57.87 17:06:13 17:02:01 4 5.35 0.00 2.30 8.88 0.05 83.42 17:06:13 17:02:01 5 4.88 0.00 2.28 27.38 0.05 65.40 17:06:13 17:02:01 6 5.95 0.00 3.32 2.56 0.05 88.13 17:06:13 17:02:01 7 6.16 0.00 3.64 7.87 0.03 82.30 17:06:13 17:03:01 all 37.45 0.00 4.88 7.96 0.11 49.61 17:06:13 17:03:01 0 36.25 0.00 4.96 4.16 0.08 54.54 17:06:13 17:03:01 1 39.10 0.00 4.76 1.91 0.14 54.09 17:06:13 17:03:01 2 36.69 0.00 4.98 1.46 0.10 56.76 17:06:13 17:03:01 3 36.83 0.00 4.66 0.79 0.12 57.60 17:06:13 17:03:01 4 39.83 0.00 4.95 1.94 0.12 53.16 17:06:13 17:03:01 5 40.75 0.00 5.35 36.82 0.10 16.98 17:06:13 17:03:01 6 35.68 0.00 4.83 1.38 0.12 58.00 17:06:13 17:03:01 7 34.44 0.00 4.54 15.16 0.10 45.75 17:06:13 
17:04:01 all 40.35 0.00 4.07 2.82 0.12 52.63 17:06:13 17:04:01 0 36.90 0.00 4.11 11.98 0.12 46.89 17:06:13 17:04:01 1 48.39 0.00 4.83 0.13 0.12 46.52 17:06:13 17:04:01 2 43.01 0.00 4.35 7.57 0.12 44.96 17:06:13 17:04:01 3 41.40 0.00 3.97 0.50 0.12 54.01 17:06:13 17:04:01 4 35.30 0.00 3.65 0.86 0.15 60.04 17:06:13 17:04:01 5 36.29 0.00 3.84 0.03 0.12 59.72 17:06:13 17:04:01 6 44.34 0.00 4.18 1.33 0.12 50.03 17:06:13 17:04:01 7 37.16 0.00 3.63 0.13 0.12 58.96 17:06:13 17:05:01 all 5.97 0.00 0.89 1.92 0.05 91.17 17:06:13 17:05:01 0 6.49 0.00 0.77 0.03 0.05 92.65 17:06:13 17:05:01 1 6.27 0.00 1.02 0.00 0.07 92.64 17:06:13 17:05:01 2 5.16 0.00 1.11 14.50 0.05 79.19 17:06:13 17:05:01 3 5.51 0.00 0.84 0.03 0.05 93.57 17:06:13 17:05:01 4 6.16 0.00 0.92 0.22 0.05 92.65 17:06:13 17:05:01 5 6.44 0.00 0.84 0.18 0.05 92.49 17:06:13 17:05:01 6 5.64 0.00 0.87 0.15 0.03 93.31 17:06:13 17:05:01 7 6.08 0.00 0.75 0.27 0.05 92.85 17:06:13 17:06:01 all 7.34 0.00 0.83 1.89 0.03 89.91 17:06:13 17:06:01 0 1.02 0.00 0.55 0.58 0.02 97.83 17:06:13 17:06:01 1 2.85 0.00 0.62 0.82 0.03 95.68 17:06:13 17:06:01 2 1.88 0.00 0.78 8.65 0.03 88.66 17:06:13 17:06:01 3 13.93 0.00 1.10 0.70 0.03 84.23 17:06:13 17:06:01 4 20.11 0.00 1.19 0.80 0.05 77.85 17:06:13 17:06:01 5 3.06 0.00 0.72 1.09 0.02 95.12 17:06:13 17:06:01 6 13.90 0.00 0.95 0.67 0.05 84.43 17:06:13 17:06:01 7 2.02 0.00 0.73 1.77 0.02 95.46 17:06:13 Average: all 14.35 0.00 2.19 6.89 0.06 76.52 17:06:13 Average: 0 12.89 0.00 2.07 8.39 0.05 76.59 17:06:13 Average: 1 16.15 0.00 2.42 6.11 0.07 75.25 17:06:13 Average: 2 12.71 0.00 2.23 6.00 0.05 79.01 17:06:13 Average: 3 14.05 0.00 2.05 7.20 0.06 76.65 17:06:13 Average: 4 14.39 0.00 2.03 4.51 0.06 79.01 17:06:13 Average: 5 13.78 0.00 2.16 14.20 0.06 69.81 17:06:13 Average: 6 14.76 0.00 2.27 2.37 0.06 80.55 17:06:13 Average: 7 16.04 0.00 2.28 6.31 0.06 75.30 17:06:13 17:06:13 17:06:13
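Editor's note: the sar -P ALL report above lists per-CPU %user/%nice/%system/%iowait/%steal/%idle samples for the duration of the job, plus per-CPU averages. As a small illustration of how such figures could be re-collected and the idle column summarized, the sketch below runs sar with the same -P ALL flag seen above and parses its Average rows; it assumes the sysstat package is installed and the default English-locale output format.

    # Illustrative sketch only -- re-collects a short per-CPU sample like the report above
    # and prints the %idle column of the Average rows.  Requires sysstat (the `sar` command).
    import subprocess

    def per_cpu_idle(samples: int = 3, interval: int = 1) -> dict[str, float]:
        # `sar -P ALL <interval> <count>` prints per-CPU utilisation, as in the job report.
        out = subprocess.run(
            ["sar", "-P", "ALL", str(interval), str(samples)],
            capture_output=True, text=True, check=True,
        ).stdout
        idle: dict[str, float] = {}
        for line in out.splitlines():
            cols = line.split()
            # Average rows look like: "Average:  <cpu>  %user %nice %system %iowait %steal %idle"
            if len(cols) == 8 and cols[0] == "Average:" and cols[1] != "CPU":
                idle[cols[1]] = float(cols[7])   # last column is %idle
        return idle

    if __name__ == "__main__":
        for cpu, pct in per_cpu_idle().items():
            print(f"CPU {cpu}: {pct:.1f}% idle")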