Started by upstream project "policy-docker-master-merge-java" build number 331
originally caused by:
 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/136986
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-12484 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-clamp-master-project-csit-clamp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-l5NG3dJ4kw1v/agent.2126
SSH_AGENT_PID=2128
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-clamp-master-project-csit-clamp@tmp/private_key_16976043584845451425.key (/w/workspace/policy-clamp-master-project-csit-clamp@tmp/private_key_16976043584845451425.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-clamp-master-project-csit-clamp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision f2e4da7e296548fb3980fd212e3a67dc83254e1d (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f2e4da7e296548fb3980fd212e3a67dc83254e1d # timeout=30
Commit message: "Add kafka support in Policy CSIT"
 > git rev-list --no-walk b9d434aeef048c4ea2cf9bd8a27681d375ec5b85 # timeout=10
provisioning config files...
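The checkout above pins the workspace to an exact revision rather than a branch tip. For reference, a minimal standalone sketch of the same pattern (repository URL and commit hash taken from the log; the Jenkins-injected timeout flags are omitted):

    # clone the mirror and check out the exact revision under test
    git init /w/workspace/policy-clamp-master-project-csit-clamp
    cd /w/workspace/policy-clamp-master-project-csit-clamp
    git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git \
        '+refs/heads/*:refs/remotes/origin/*'
    # detached checkout of the commit "Add kafka support in Policy CSIT"
    git checkout -f f2e4da7e296548fb3980fd212e3a67dc83254e1d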
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins14279109291731109591.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-5bqd
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-5bqd/bin to PATH
Generating Requirements File
Python 3.10.6
pip 23.3.2 from /tmp/venv-5bqd/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.2.1 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.2 boto3==1.34.19 botocore==1.34.19 bs4==0.0.1 cachetools==5.3.2 certifi==2023.11.17 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.5.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==2.5.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.4.2 docker==4.2.2 dogpile.cache==1.3.0 email-validator==2.1.0.post1 filelock==3.13.1 future==0.18.3 gitdb==4.0.11 GitPython==3.1.41 google-auth==2.26.2 httplib2==0.22.0 identify==2.5.33 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.20.0 jsonschema-specifications==2023.12.1 keystoneauth1==5.5.0 kubernetes==29.0.0 lftools==0.37.8 lxml==5.1.0 MarkupSafe==2.1.3 msgpack==1.0.7 multi_key_dict==2.0.3 munch==4.0.0 netaddr==0.10.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==0.62.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.0 oslo.config==9.3.0 oslo.context==5.3.0 oslo.i18n==6.2.0 oslo.log==5.4.0 oslo.serialization==5.3.0 oslo.utils==6.3.0 packaging==23.2 pbr==6.0.0 platformdirs==4.1.0 prettytable==3.9.0 pyasn1==0.5.1 pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.1.1 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.4.0 python-dateutil==2.8.2 python-heatclient==3.4.0 python-jenkins==1.8.2 python-keystoneclient==5.3.0 python-magnumclient==4.3.0 python-novaclient==18.4.0 python-openstackclient==6.0.0 python-swiftclient==4.4.0 pytz==2023.3.post1 PyYAML==6.0.1 referencing==0.32.1 requests==2.31.0 requests-oauthlib==1.3.1 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.17.1 rsa==4.9 ruamel.yaml==0.18.5 ruamel.yaml.clib==0.2.8 s3transfer==0.10.0 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.1.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.3 tqdm==4.66.1 typing_extensions==4.9.0 tzdata==2023.4 urllib3==1.26.18 virtualenv==20.25.0 wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.2.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
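The python-tools-install.sh step amounts to creating a throwaway venv, installing lftools into it, and snapshotting the result, which is what produces the package list above. A minimal sketch of the equivalent commands (the venv path is the generated name from this run):

    python3 -m venv /tmp/venv-5bqd
    . /tmp/venv-5bqd/bin/activate
    python3 -m pip install --upgrade pip
    python3 -m pip install lftools
    python3 -m pip freeze        # the "Generating Requirements File" output above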
[policy-clamp-master-project-csit-clamp] $ /bin/sh /tmp/jenkins13712484322223828787.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-clamp-master-project-csit-clamp] $ /bin/sh -xe /tmp/jenkins4499326111289870818.sh + /w/workspace/policy-clamp-master-project-csit-clamp/csit/run-project-csit.sh clamp + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-clamp-master-project-csit-clamp ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin + export SCRIPTS=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts + SCRIPTS=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts + export ROBOT_VARIABLES= + ROBOT_VARIABLES= + export PROJECT=clamp + PROJECT=clamp + cd /w/workspace/policy-clamp-master-project-csit-clamp + rm -rf /w/workspace/policy-clamp-master-project-csit-clamp/csit/archives/clamp + mkdir -p /w/workspace/policy-clamp-master-project-csit-clamp/csit/archives/clamp + source_safely /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-clamp-master-project-csit-clamp ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.SpRd5mOTia ++ echo ROBOT_VENV=/tmp/tmp.SpRd5mOTia +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.SpRd5mOTia ++ source /tmp/tmp.SpRd5mOTia/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.SpRd5mOTia +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin +++ PATH=/tmp/tmp.SpRd5mOTia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.SpRd5mOTia) ' '!=' x ']' +++ PS1='(tmp.SpRd5mOTia) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.2 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.SpRd5mOTia/src/onap ++ rm -rf /tmp/tmp.SpRd5mOTia/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. 
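At this point the Robot venv has robotframework, robotframework-onap (pulled from the Nexus staging index) and confluent-kafka installed; the docker library swap announced above follows next. A quick smoke test for the venv (hypothetical check, not part of the job):

    # confirm the CSIT-critical libraries import cleanly
    python3 -c 'import confluent_kafka; print(confluent_kafka.version())'
    python3 -m robot.run --version    # prints "Robot Framework 6.1.1 ..."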
++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.2 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==0.18.3 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ tr : ' ' ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.SpRd5mOTia/bin/activate + '[' -z /tmp/tmp.SpRd5mOTia/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.SpRd5mOTia/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.SpRd5mOTia ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin ++ PATH=/tmp/tmp.SpRd5mOTia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-clamp-master-project-csit-clamp/csit:/w/workspace/policy-clamp-master-project-csit-clamp/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.SpRd5mOTia) ' ++ '[' 'x(tmp.SpRd5mOTia) ' '!=' x ']' ++ PS1='(tmp.SpRd5mOTia) (tmp.SpRd5mOTia) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.qcL4gSdRzk + cd /tmp/tmp.qcL4gSdRzk + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh + '[' -f /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh ']' + echo 'Running setup script /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh' Running setup script /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh + source_safely /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh + '[' -z /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/setup-clamp.sh ++ source /w/workspace/policy-clamp-master-project-csit-clamp/compose/start-compose.sh policy-clamp-runtime-acm +++ '[' -z /w/workspace/policy-clamp-master-project-csit-clamp ']' +++ COMPOSE_FOLDER=/w/workspace/policy-clamp-master-project-csit-clamp/compose +++ grafana=false +++ gui=false +++ [[ 1 -gt 0 ]] +++ key=policy-clamp-runtime-acm +++ case $key in +++ echo policy-clamp-runtime-acm policy-clamp-runtime-acm +++ component=policy-clamp-runtime-acm +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-clamp-master-project-csit-clamp/compose +++ echo 'Configuring docker compose...' Configuring docker compose... 
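The docker login above passes the registry password on the command line, which is what triggers both warnings. Docker's own suggestion is --password-stdin; a minimal sketch with the same throwaway credentials:

    echo 'docker' | docker login -u docker --password-stdin nexus3.onap.org:10001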
+++ source export-ports.sh +++ source get-versions.sh +++ '[' -z clamp ']' +++ '[' -n policy-clamp-runtime-acm ']' +++ '[' policy-clamp-runtime-acm == logs ']' +++ '[' false = true ']' +++ '[' false = true ']' +++ echo 'Starting policy-clamp-runtime-acm application' Starting policy-clamp-runtime-acm application +++ docker-compose up -d policy-clamp-runtime-acm Creating network "compose_default" with the default driver Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 10.10.2: Pulling from mariadb Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... latest: Pulling from confluentinc/cp-zookeeper Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest Pulling kafka (confluentinc/cp-kafka:latest)... latest: Pulling from confluentinc/cp-kafka Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9 Status: Downloaded newer image for confluentinc/cp-kafka:latest Pulling policy-clamp-ac-sim-ppnt (nexus3.onap.org:10001/onap/policy-clamp-ac-sim-ppnt:7.1.1-SNAPSHOT)... 7.1.1-SNAPSHOT: Pulling from onap/policy-clamp-ac-sim-ppnt Digest: sha256:823fe333f432aedab83551ba7e782cfe7436edb89a5c1c624094a136356afc03 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-clamp-ac-sim-ppnt:7.1.1-SNAPSHOT Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator Digest: sha256:c8de5cbf268fa5f6c292c92e9259f82da5e78c0ddfa1fbf3e9257a00c3930808 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.0)... 3.1.0: Pulling from onap/policy-api Digest: sha256:5c4c03761af8683035bdfb23ad490044d6b151e5d5939a59b93a6064a761dbbd Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.0 Pulling policy-clamp-ac-pf-ppnt (nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:7.1.1-SNAPSHOT)... 7.1.1-SNAPSHOT: Pulling from onap/policy-clamp-ac-pf-ppnt Digest: sha256:7aa2fa0952fcb60258a8cdbaca8062f5969e525df3afbbe7ac6552564ad475cc Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:7.1.1-SNAPSHOT Pulling policy-clamp-ac-k8s-ppnt (nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:7.1.1-SNAPSHOT)... 7.1.1-SNAPSHOT: Pulling from onap/policy-clamp-ac-k8s-ppnt Digest: sha256:ad3f891b6dc23a326ac45f123e9a0882f58a92724fc90c88bf743e4083d0a6a9 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:7.1.1-SNAPSHOT Pulling policy-clamp-ac-http-ppnt (nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:7.1.1-SNAPSHOT)... 7.1.1-SNAPSHOT: Pulling from onap/policy-clamp-ac-http-ppnt Digest: sha256:7f033100f186c1e8a4c5cf40d3230156cdacaf0ba6470e412ec11ea22bd98f11 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:7.1.1-SNAPSHOT Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)... 
3.1.1-SNAPSHOT: Pulling from onap/policy-pap Digest: sha256:37c4361d99c3f559835790653cd75fd194587e3e5951cbeb5086d1c0b8af6b74 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT Pulling policy-clamp-runtime-acm (nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:7.1.1-SNAPSHOT)... 7.1.1-SNAPSHOT: Pulling from onap/policy-clamp-runtime-acm Digest: sha256:2e5083e3a307efc47eaf0adf86e15db311c1845f188437d9be37365c9658c355 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:7.1.1-SNAPSHOT Creating simulator ... Creating mariadb ... Creating compose_zookeeper_1 ... Creating compose_zookeeper_1 ... done Creating kafka ... Creating mariadb ... done Creating policy-db-migrator ... Creating simulator ... done Creating kafka ... done Creating policy-clamp-ac-k8s-ppnt ... Creating policy-clamp-ac-sim-ppnt ... Creating policy-clamp-ac-http-ppnt ... Creating policy-db-migrator ... done Creating policy-api ... Creating policy-clamp-ac-sim-ppnt ... done Creating policy-clamp-ac-http-ppnt ... done Creating policy-clamp-ac-k8s-ppnt ... done Creating policy-api ... done Creating policy-pap ... Creating policy-clamp-ac-pf-ppnt ... Creating policy-clamp-ac-pf-ppnt ... done Creating policy-pap ... done Creating policy-apex-pdp ... Creating policy-apex-pdp ... done Creating policy-clamp-runtime-acm ... Creating policy-clamp-runtime-acm ... done +++ cd /w/workspace/policy-clamp-master-project-csit-clamp ++ sleep 10 ++ unset http_proxy https_proxy ++ /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/scripts/wait_for_rest.sh localhost 30007 Waiting for REST to come up on localhost port 30007... 
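wait_for_rest.sh itself is not shown in the log; judging by the output that follows, it blocks until the ACM runtime answers on the given port, listing the containers between attempts. A plausible reconstruction under that assumption (the script body, the nc usage and the 5-second interval are guesses):

    #!/usr/bin/env bash
    # wait_for_rest.sh <host> <port>: poll until the TCP port accepts connections
    host=$1 port=$2
    echo "Waiting for REST to come up on ${host} port ${port}..."
    until nc -z "${host}" "${port}"; do
        docker ps --format 'table {{ .Names }}\t{{ .Status }}'
        sleep 5
    done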
NAMES STATUS policy-clamp-runtime-acm Up 10 seconds policy-apex-pdp Up 11 seconds policy-pap Up 12 seconds policy-clamp-ac-pf-ppnt Up 13 seconds policy-api Up 14 seconds policy-clamp-ac-http-ppnt Up 16 seconds policy-clamp-ac-sim-ppnt Up 18 seconds policy-clamp-ac-k8s-ppnt Up 16 seconds kafka Up 21 seconds compose_zookeeper_1 Up 23 seconds simulator Up 21 seconds mariadb Up 22 seconds NAMES STATUS policy-clamp-runtime-acm Up 15 seconds policy-apex-pdp Up 16 seconds policy-pap Up 17 seconds policy-clamp-ac-pf-ppnt Up 18 seconds policy-api Up 19 seconds policy-clamp-ac-http-ppnt Up 22 seconds policy-clamp-ac-sim-ppnt Up 23 seconds policy-clamp-ac-k8s-ppnt Up 21 seconds kafka Up 26 seconds compose_zookeeper_1 Up 28 seconds simulator Up 26 seconds mariadb Up 27 seconds NAMES STATUS policy-clamp-runtime-acm Up 20 seconds policy-apex-pdp Up 21 seconds policy-pap Up 22 seconds policy-clamp-ac-pf-ppnt Up 23 seconds policy-api Up 25 seconds policy-clamp-ac-http-ppnt Up 27 seconds policy-clamp-ac-sim-ppnt Up 28 seconds policy-clamp-ac-k8s-ppnt Up 26 seconds kafka Up 31 seconds compose_zookeeper_1 Up 33 seconds simulator Up 31 seconds mariadb Up 32 seconds NAMES STATUS policy-clamp-runtime-acm Up 25 seconds policy-apex-pdp Up 26 seconds policy-pap Up 27 seconds policy-clamp-ac-pf-ppnt Up 28 seconds policy-api Up 30 seconds policy-clamp-ac-http-ppnt Up 32 seconds policy-clamp-ac-sim-ppnt Up 33 seconds policy-clamp-ac-k8s-ppnt Up 31 seconds kafka Up 36 seconds compose_zookeeper_1 Up 38 seconds simulator Up 36 seconds mariadb Up 37 seconds NAMES STATUS policy-clamp-runtime-acm Up 30 seconds policy-apex-pdp Up 31 seconds policy-pap Up 32 seconds policy-clamp-ac-pf-ppnt Up 33 seconds policy-api Up 35 seconds policy-clamp-ac-http-ppnt Up 37 seconds policy-clamp-ac-sim-ppnt Up 38 seconds policy-clamp-ac-k8s-ppnt Up 36 seconds kafka Up 41 seconds compose_zookeeper_1 Up 43 seconds simulator Up 41 seconds mariadb Up 42 seconds NAMES STATUS policy-clamp-runtime-acm Up 35 seconds policy-apex-pdp Up 36 seconds policy-pap Up 37 seconds policy-clamp-ac-pf-ppnt Up 38 seconds policy-api Up 40 seconds policy-clamp-ac-http-ppnt Up 42 seconds policy-clamp-ac-sim-ppnt Up 43 seconds policy-clamp-ac-k8s-ppnt Up 41 seconds kafka Up 46 seconds compose_zookeeper_1 Up 49 seconds simulator Up 46 seconds mariadb Up 48 seconds NAMES STATUS policy-clamp-runtime-acm Up 40 seconds policy-apex-pdp Up 41 seconds policy-pap Up 42 seconds policy-clamp-ac-pf-ppnt Up 43 seconds policy-api Up 45 seconds policy-clamp-ac-http-ppnt Up 47 seconds policy-clamp-ac-sim-ppnt Up 48 seconds policy-clamp-ac-k8s-ppnt Up 46 seconds kafka Up 51 seconds compose_zookeeper_1 Up 54 seconds simulator Up 52 seconds mariadb Up 53 seconds NAMES STATUS policy-clamp-runtime-acm Up 45 seconds policy-apex-pdp Up 46 seconds policy-pap Up 47 seconds policy-clamp-ac-pf-ppnt Up 49 seconds policy-api Up 50 seconds policy-clamp-ac-http-ppnt Up 52 seconds policy-clamp-ac-sim-ppnt Up 53 seconds policy-clamp-ac-k8s-ppnt Up 51 seconds kafka Up 56 seconds compose_zookeeper_1 Up 59 seconds simulator Up 57 seconds mariadb Up 58 seconds NAMES STATUS policy-clamp-runtime-acm Up 50 seconds policy-apex-pdp Up 51 seconds policy-pap Up 52 seconds policy-clamp-ac-pf-ppnt Up 54 seconds policy-api Up 55 seconds policy-clamp-ac-http-ppnt Up 57 seconds policy-clamp-ac-sim-ppnt Up 58 seconds policy-clamp-ac-k8s-ppnt Up 56 seconds kafka Up About a minute compose_zookeeper_1 Up About a minute simulator Up About a minute mariadb Up About a minute NAMES STATUS policy-clamp-runtime-acm Up 55 
seconds policy-apex-pdp Up 56 seconds policy-pap Up 57 seconds policy-clamp-ac-pf-ppnt Up 59 seconds policy-api Up About a minute policy-clamp-ac-http-ppnt Up About a minute policy-clamp-ac-sim-ppnt Up About a minute policy-clamp-ac-k8s-ppnt Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute simulator Up About a minute mariadb Up About a minute ++ CLAMP_K8S_TEST=false ++ export SUITES=policy-clamp-test.robot ++ SUITES=policy-clamp-test.robot ++ ROBOT_VARIABLES='-v POLICY_RUNTIME_ACM_IP:localhost:30007 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v CLAMP_K8S_TEST:false' + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + docker_stats + tee /w/workspace/policy-clamp-master-project-csit-clamp/csit/archives/clamp/_sysinfo-1-after-setup.txt ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 15:28:26 up 5 min, 0 users, load average: 6.00, 2.64, 1.03 Tasks: 229 total, 1 running, 153 sleeping, 0 stopped, 0 zombie %Cpu(s): 19.1 us, 3.5 sy, 0.0 ni, 73.3 id, 4.1 wa, 0.0 hi, 0.1 si, 0.1 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 4.6G 20G 1.5M 6.5G 26G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-clamp-runtime-acm Up 56 seconds policy-apex-pdp Up 57 seconds policy-pap Up 58 seconds policy-clamp-ac-pf-ppnt Up 59 seconds policy-api Up About a minute policy-clamp-ac-http-ppnt Up About a minute policy-clamp-ac-sim-ppnt Up About a minute policy-clamp-ac-k8s-ppnt Up About a minute kafka Up About a minute compose_zookeeper_1 Up About a minute simulator Up About a minute mariadb Up About a minute + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 456aee401c06 policy-clamp-runtime-acm 1.10% 620.1MiB / 31.41GiB 1.93% 16.4kB / 22.5kB 0B / 0B 61 4893c4a38115 policy-apex-pdp 1.88% 172.2MiB / 31.41GiB 0.54% 14.6kB / 15.5kB 0B / 0B 49 acc75b4c7eae policy-pap 2.34% 620.8MiB / 31.41GiB 1.93% 39kB / 44.9kB 0B / 181MB 62 b14d58b1245c policy-clamp-ac-pf-ppnt 1.96% 311.2MiB / 31.41GiB 0.97% 17.4kB / 18.2kB 0B / 0B 57 0fea5e05af34 policy-api 0.42% 500.3MiB / 31.41GiB 1.56% 1MB / 713kB 0B / 0B 53 b391e6e6a089 policy-clamp-ac-http-ppnt 0.61% 307.2MiB / 31.41GiB 0.96% 29.9kB / 34.6kB 0B / 0B 56 eb2e36db8a11 policy-clamp-ac-sim-ppnt 0.51% 305.8MiB / 31.41GiB 0.95% 27.3kB / 32.1kB 0B / 0B 58 66976738a05d policy-clamp-ac-k8s-ppnt 0.72% 308.7MiB / 31.41GiB 0.96% 22.8kB / 25.8kB 0B / 0B 58 91df149ffd2d kafka 3.13% 404.9MiB / 31.41GiB 1.26% 197kB / 181kB 0B / 549kB 83 4f772e7bfeb3 compose_zookeeper_1 0.13% 102.2MiB / 31.41GiB 0.32% 65.4kB / 56.1kB 168kB / 446kB 61 abbabc2ff8e0 simulator 0.28% 179.7MiB / 31.41GiB 0.56% 1.72kB / 0B 0B / 0B 93 8d136857f9ee mariadb 0.11% 103.8MiB / 31.41GiB 0.32% 1.02MB / 1.2MB 11MB / 71.6MB 43 + echo + cd /tmp/tmp.qcL4gSdRzk + echo 'Reading the testplan:' Reading the testplan: + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' + echo policy-clamp-test.robot + sed 
's|^|/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/policy-clamp-test.robot
++ xargs
+ SUITES=/w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/policy-clamp-test.robot
+ echo 'ROBOT_VARIABLES=-v POLICY_RUNTIME_ACM_IP:localhost:30007 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v CLAMP_K8S_TEST:false'
ROBOT_VARIABLES=-v POLICY_RUNTIME_ACM_IP:localhost:30007 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v CLAMP_K8S_TEST:false
+ echo 'Starting Robot test suites /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/policy-clamp-test.robot ...'
Starting Robot test suites /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/policy-clamp-test.robot ...
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N clamp -v WORKSPACE:/tmp -v POLICY_RUNTIME_ACM_IP:localhost:30007 -v POLICY_API_IP:localhost:30002 -v POLICY_PAP_IP:localhost:30003 -v CLAMP_K8S_TEST:false /w/workspace/policy-clamp-master-project-csit-clamp/csit/resources/tests/policy-clamp-test.robot
==============================================================================
clamp
==============================================================================
Healthcheck :: Healthcheck on Clamp Acm                               | PASS |
------------------------------------------------------------------------------
CommissionAutomationComposition :: Commission automation composition. | PASS |
------------------------------------------------------------------------------
RegisterParticipants :: Register Participants.                        | PASS |
------------------------------------------------------------------------------
PrimeACDefinitions :: Prime automation composition definition         | FAIL |
HTTPError: 400 Client Error: for url: http://localhost:30007/onap/policy/clamp/acm/v2/compositions/46f66bfb-4746-4a5e-be64-7f61ac30302d
------------------------------------------------------------------------------
InstantiateAutomationComposition :: Instantiate automation composi... | FAIL |
HTTPError: 400 Client Error: for url: http://localhost:30007/onap/policy/clamp/acm/v2/compositions/46f66bfb-4746-4a5e-be64-7f61ac30302d/instances
------------------------------------------------------------------------------
DeployAutomationComposition :: Deploy automation composition.         | FAIL |
Variable '${instanceId}' not found.
------------------------------------------------------------------------------
QueryPolicies :: Verify the new policies deployed                     | FAIL |
HTTPError: 404 Client Error: for url: http://localhost:30003/policy/pap/v1/policies/deployed
------------------------------------------------------------------------------
QueryPolicyTypes :: Verify the new policy types created               | PASS |
------------------------------------------------------------------------------
UnDeployAutomationComposition :: UnDeploy automation composition.     | FAIL |
Variable '${instanceId}' not found.
------------------------------------------------------------------------------
UnInstantiateAutomationComposition :: Delete automation compositio... | FAIL |
Variable '${instanceId}' not found.
------------------------------------------------------------------------------
DePrimeACDefinitions :: DePrime automation composition definition     | PASS |
------------------------------------------------------------------------------
DeleteACDefinition :: Delete automation composition definition.       | PASS |
------------------------------------------------------------------------------
clamp                                                                 | FAIL |
12 tests, 6 passed, 6 failed
==============================================================================
Output: /tmp/tmp.qcL4gSdRzk/output.xml
Log: /tmp/tmp.qcL4gSdRzk/log.html
Report: /tmp/tmp.qcL4gSdRzk/report.html
+ RESULT=6
+ load_set
+ _setopts=hxB
++ tr : ' '
++ echo braceexpand:hashall:interactive-comments:xtrace
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ sed 's/./& /g'
++ echo hxB
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 6'
RESULT: 6
+ exit 6
+ on_exit
+ rc=6
+ [[ -n /w/workspace/policy-clamp-master-project-csit-clamp ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                       STATUS
policy-clamp-runtime-acm    Up About a minute
policy-apex-pdp             Up About a minute
policy-pap                  Up About a minute
policy-clamp-ac-pf-ppnt     Up About a minute
policy-api                  Up About a minute
policy-clamp-ac-http-ppnt   Up About a minute
policy-clamp-ac-sim-ppnt    Up About a minute
policy-clamp-ac-k8s-ppnt    Up About a minute
kafka                       Up About a minute
compose_zookeeper_1         Up About a minute
simulator                   Up About a minute
mariadb                     Up About a minute
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 15:28:45 up 5 min, 0 users, load average: 5.08, 2.61, 1.05
Tasks: 223 total, 1 running, 151 sleeping, 0 stopped, 0 zombie
%Cpu(s): 19.0 us, 3.4 sy, 0.0 ni, 73.5 id, 3.9 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        5.0G         19G        1.5M        6.5G         25G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                       STATUS
policy-clamp-runtime-acm    Up About a minute
policy-apex-pdp             Up About a minute
policy-pap                  Up About a minute
policy-clamp-ac-pf-ppnt     Up About a minute
policy-api                  Up About a minute
policy-clamp-ac-http-ppnt   Up About a minute
policy-clamp-ac-sim-ppnt    Up About a minute
policy-clamp-ac-k8s-ppnt    Up About a minute
kafka                       Up About a minute
compose_zookeeper_1         Up About a minute
simulator                   Up About a minute
mariadb                     Up About a minute
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                        CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
456aee401c06   policy-clamp-runtime-acm    1.67%   445.6MiB / 31.41GiB   1.39%   2.99MB / 398kB    0B / 0B         64
4893c4a38115   policy-apex-pdp             0.92%   174.8MiB / 31.41GiB   0.54%   29.1kB / 32.8kB   0B / 0B         50
acc75b4c7eae   policy-pap                  1.12%   633.5MiB / 31.41GiB   1.97%   135kB / 110kB     0B / 181MB      63
b14d58b1245c   policy-clamp-ac-pf-ppnt     0.84%   311.7MiB / 31.41GiB   0.97%   32.4kB / 33.8kB   0B / 0B         58
0fea5e05af34   policy-api                  0.13%   943.8MiB / 31.41GiB   2.93%   1.45MB / 871kB    0B / 0B         53
b391e6e6a089   policy-clamp-ac-http-ppnt   0.82%   307.6MiB / 31.41GiB   0.96%   45.5kB / 50.9kB   0B / 0B         57
eb2e36db8a11   policy-clamp-ac-sim-ppnt    1.95%   306.4MiB / 31.41GiB   0.95%   42.9kB / 48.5kB   0B / 0B         59
66976738a05d   policy-clamp-ac-k8s-ppnt    1.26%   311.7MiB / 31.41GiB   0.97%   38.8kB / 42.5kB   0B / 0B         59
91df149ffd2d   kafka                       2.99%   409MiB / 31.41GiB     1.27%   322kB / 302kB     0B / 565kB      83
4f772e7bfeb3   compose_zookeeper_1         0.10%   102.2MiB / 31.41GiB   0.32%   66kB / 56.4kB     168kB / 446kB   61
abbabc2ff8e0   simulator                   0.27%   179.8MiB / 31.41GiB   0.56%   1.8kB / 0B        0B / 0B         93
8d136857f9ee   mariadb                     0.04%   105.4MiB / 31.41GiB   0.33%   1.45MB / 4.61MB   11MB / 71.8MB   40
+ echo
+
source_safely /w/workspace/policy-clamp-master-project-csit-clamp/compose/stop-compose.sh + '[' -z /w/workspace/policy-clamp-master-project-csit-clamp/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-clamp-master-project-csit-clamp/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-clamp-master-project-csit-clamp ']' ++ COMPOSE_FOLDER=/w/workspace/policy-clamp-master-project-csit-clamp/compose ++ cd /w/workspace/policy-clamp-master-project-csit-clamp/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-clamp-runtime-acm, policy-apex-pdp, policy-pap, policy-clamp-ac-pf-ppnt, policy-api, policy-clamp-ac-http-ppnt, policy-clamp-ac-sim-ppnt, policy-clamp-ac-k8s-ppnt, policy-db-migrator, kafka, compose_zookeeper_1, simulator, mariadb policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.3:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.5:9092) open policy-apex-pdp | Waiting for pap port 6969... policy-apex-pdp | pap (172.17.0.12:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-01-15T15:28:12.621+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-01-15T15:28:12.775+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = a66a1343-5c6d-4c55-860d-3d5ddcaf6219 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted 
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 
policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-01-15T15:28:12.914+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-01-15T15:28:12.914+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-01-15T15:28:12.914+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332492912 policy-apex-pdp | [2024-01-15T15:28:12.916+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-1, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-01-15T15:28:12.929+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-01-15T15:28:12.929+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-01-15T15:28:12.934+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a66a1343-5c6d-4c55-860d-3d5ddcaf6219, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-01-15T15:28:12.954+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2 policy-apex-pdp | client.rack = zookeeper_1 | ===> User zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper_1 | ===> Configuring ... zookeeper_1 | ===> Running preflight checks ... zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper_1 | ===> Launching ... zookeeper_1 | ===> Launching zookeeper ... 
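While the broker and participants come up, a useful sanity check is whether the broker has registered in ZooKeeper. A hedged sketch using the zookeeper-shell tool bundled in the cp-kafka image (container name from this run; the zookeeper:2181 address is assumed from the compose network):

    docker exec kafka zookeeper-shell zookeeper:2181 ls /brokers/ids
    # prints e.g. [1] once the single broker has registered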
zookeeper_1 | [2024-01-15 15:27:21,250] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-15 15:27:21,284] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-15 15:27:21,284] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-15 15:27:21,284] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-15 15:27:21,284] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-15 15:27:21,286] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-15 15:27:21,286] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-15 15:27:21,287] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-15 15:27:21,287] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper_1 | [2024-01-15 15:27:21,288] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper_1 | [2024-01-15 15:27:21,288] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-15 15:27:21,289] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-15 15:27:21,289] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-15 15:27:21,289] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-15 15:27:21,289] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-15 15:27:21,290] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper_1 | [2024-01-15 15:27:21,306] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@55b53d44 (org.apache.zookeeper.server.ServerMetrics) zookeeper_1 | [2024-01-15 15:27:21,309] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-01-15 15:27:21,309] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-01-15 15:27:21,312] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-15 15:27:21,321] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,321] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,321] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 
a66a1343-5c6d-4c55-860d-3d5ddcaf6219 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope zookeeper_1 | [2024-01-15 15:27:21,322] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,322] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,322] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,322] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,322] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,322] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,322] INFO 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,323] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,323] INFO Server environment:host.name=4f772e7bfeb3 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,323] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,324] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,324] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-01-15T15:28:12.962+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-01-15T15:28:12.963+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-01-15T15:28:12.963+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332492962 policy-apex-pdp | [2024-01-15T15:28:12.963+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-01-15T15:28:12.964+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4c4c8940-22f4-4ba0-b711-56ce92759977, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-01-15T15:28:12.975+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none policy-apex-pdp | 
connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-01-15 15:27:23,912] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,912] INFO Client environment:host.name=91df149ffd2d (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,912] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,912] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO 
Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,913] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,916] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:23,919] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-01-15 15:27:23,924] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-01-15 15:27:23,931] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-15 15:27:23,945] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-15 15:27:23,946] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-15 15:27:23,958] INFO Socket connection established, initiating session, client: /172.17.0.5:59520, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-15 15:27:23,990] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003f89f0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-15 15:27:24,110] INFO Session: 0x1000003f89f0000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-15 15:27:24,110] INFO EventThread shut down for session: 0x1000003f89f0000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
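The kafka preflight above opens a throwaway ZooKeeper session (session id 0x1000003f89f0000) against zookeeper:2181 and closes it immediately; its only purpose is to confirm the ensemble is reachable before the broker launches. A minimal shell sketch of the same reachability check, assuming the four-letter words are enabled on the zookeeper container (srvr is whitelisted by default in ZooKeeper 3.5+, ruok must be added via 4lw.commands.whitelist) and that nc is available:

  # Probe the same ZooKeeper endpoint the broker preflight uses.
  echo srvr | nc -w 2 zookeeper 2181   # version, latency stats, server mode
  echo ruok | nc -w 2 zookeeper 2181   # expected reply: imok (needs whitelisting)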
zookeeper_1 | [2024-01-15 15:27:21,324] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../shar
e/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,324] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,324] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,324] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,324] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,324] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,324] INFO Server 
environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,324] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,325] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,325] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,325] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,325] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,325] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,325] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,325] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,325] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,326] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,326] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,326] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,326] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,327] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper_1 | [2024-01-15 15:27:21,328] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,328] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,329] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-01-15 15:27:21,329] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-01-15 15:27:21,330] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-15 15:27:21,330] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-15 15:27:21,330] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-15 15:27:21,330] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-15 15:27:21,330] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-15 15:27:21,330] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-15 15:27:21,332] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,333] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-01-15 15:27:18+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-01-15 15:27:18+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-01-15 15:27:18+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-01-15 15:27:18+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-01-15 15:27:19 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-15 15:27:19 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-15 15:27:19 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-01-15 15:27:23+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-01-15 15:27:23+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-01-15 15:27:23+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-01-15 15:27:23 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 100 ... 
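The repeated io_uring_queue_init() ENOSYS warnings above are expected on this builder: the host kernel is 4.15 (see the os.version lines earlier), while io_uring needs 5.1 or newer, so InnoDB falls back to innodb_use_native_aio=OFF on its own. A hedged sketch that makes the fallback explicit at startup, assuming the same mariadb:10.10 image, so the warning should never fire:

  # Disable native AIO up front on a pre-5.1 kernel; extra arguments after the
  # image name are passed through to mariadbd by the official entrypoint.
  docker run -d --name mariadb \
    -e MYSQL_ROOT_PASSWORD=secret \
    mariadb:10.10 --innodb-use-native-aio=0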
policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null mariadb | 2024-01-15 15:27:23 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-01-15 15:27:23 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-01-15 15:27:23 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-01-15 15:27:23 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-01-15 15:27:23 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-15 15:27:23 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-15 15:27:23 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-01-15 15:27:23 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-01-15 15:27:23 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-01-15 15:27:23 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-01-15 15:27:23 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-01-15 15:27:23 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-01-15 15:27:23 0 [Note] InnoDB: log sequence number 46456; transaction id 14 mariadb | 2024-01-15 15:27:23 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-01-15 15:27:23 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-01-15 15:27:23 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-01-15 15:27:23 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-01-15 15:27:23 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-01-15 15:27:24+00:00 [Note] [Entrypoint]: Temporary server started. 
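The ProducerConfig dump resuming above (StringSerializer for key and value, idempotence enabled, acks=-1 earlier in the dump) describes the client apex-pdp will use to publish onto policy-pdp-pap. As an illustration only, a hand-written record can be pushed through the same topic from inside the kafka container; dispatchers that do not recognize the message type simply discard it, as the "discarding event of type PDP_STATUS" lines later in this log show:

  # Publish one throwaway record to the heartbeat topic; kafka-console-producer
  # ships on PATH in the Confluent image used here.
  echo '{"messageName":"TEST"}' | kafka-console-producer \
    --bootstrap-server kafka:9092 --topic policy-pdp-pap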
mariadb | 2024-01-15 15:27:26+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-01-15 15:27:26+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-01-15 15:27:26+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-01-15 15:27:26+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | policy-apex-pdp | security.protocol = PLAINTEXT 
policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2024-01-15T15:28:12.991+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-apex-pdp | [2024-01-15T15:28:13.005+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-01-15T15:28:13.006+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-01-15T15:28:13.006+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332493005 policy-apex-pdp | [2024-01-15T15:28:13.006+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4c4c8940-22f4-4ba0-b711-56ce92759977, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2024-01-15T15:28:13.007+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2024-01-15T15:28:13.007+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2024-01-15T15:28:13.009+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2024-01-15T15:28:13.009+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2024-01-15T15:28:13.011+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2024-01-15T15:28:13.011+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2024-01-15T15:28:13.011+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2024-01-15T15:28:13.012+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a66a1343-5c6d-4c55-860d-3d5ddcaf6219, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase 
[servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3 policy-apex-pdp | [2024-01-15T15:28:13.012+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a66a1343-5c6d-4c55-860d-3d5ddcaf6219, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2024-01-15T15:28:13.012+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2024-01-15T15:28:13.031+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2024-01-15T15:28:13.033+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d18cb474-e62f-4742-a150-6ab751de8aad","timestampMs":1705332493012,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-01-15T15:28:13.183+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2024-01-15T15:28:13.183+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-01-15T15:28:13.183+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2024-01-15T15:28:13.183+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@71a9b4c7{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4628b1d3{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-01-15T15:28:13.204+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-01-15T15:28:13.204+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-01-15T15:28:13.204+00:00|INFO|ApexStarterMain|main] 
Started policy-apex-pdp service successfully. mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-01-15 15:27:27+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-01-15 15:27:27 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: Starting shutdown... mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: Buffer pool(s) dump completed at 240115 15:27:27 mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: Shutdown completed; log sequence number 327965; transaction id 298 mariadb | 2024-01-15 15:27:27 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-01-15 15:27:27+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-01-15 15:27:27+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2024-01-15 15:27:27 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-01-15 15:27:27 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-01-15 15:27:27 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-15 15:27:27 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-01-15 15:27:27 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-01-15 15:27:28 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-01-15 15:27:28 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-01-15 15:27:28 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-01-15 15:27:28 0 [Note] InnoDB: log sequence number 327965; transaction id 299 mariadb | 2024-01-15 15:27:28 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-01-15 15:27:28 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-01-15 15:27:28 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-01-15 15:27:28 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 
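With the init process done and the permanent mariadb server coming up, the six schemas created by db.sh earlier (migration, pooling, policyadmin, operationshistory, clampacm, policyclamp) should all be visible to policy_user. A quick verification sketch using the same credentials the -xv trace exposes:

  # Confirm the databases created by db.sh exist and the grants work.
  mysql -upolicy_user -ppolicy_user -h mariadb -e 'SHOW DATABASES;' \
    | grep -E '^(migration|pooling|policyadmin|operationshistory|clampacm|policyclamp)$'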
policy-apex-pdp | [2024-01-15T15:28:13.205+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@71a9b4c7{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4628b1d3{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-01-15T15:28:13.330+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: f-_bmQWWQMKgLbbohjyq1w policy-apex-pdp | [2024-01-15T15:28:13.330+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Cluster ID: f-_bmQWWQMKgLbbohjyq1w policy-apex-pdp | [2024-01-15T15:28:13.331+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 6 with epoch 0 policy-apex-pdp | [2024-01-15T15:28:13.332+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2024-01-15T15:28:13.337+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] (Re-)joining group policy-apex-pdp | [2024-01-15T15:28:13.351+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Request joining group due to: need to re-join with the given member-id: consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2-dbebc620-f1dc-45e0-b152-ec6429c1646b policy-apex-pdp | [2024-01-15T15:28:13.351+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-apex-pdp | [2024-01-15T15:28:13.351+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] (Re-)joining group policy-apex-pdp | [2024-01-15T15:28:13.792+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2024-01-15T15:28:13.794+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2024-01-15T15:28:16.355+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Successfully joined group with generation Generation{generationId=1, memberId='consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2-dbebc620-f1dc-45e0-b152-ec6429c1646b', protocol='range'} policy-apex-pdp | [2024-01-15T15:28:16.362+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Finished assignment for group at generation 1: {consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2-dbebc620-f1dc-45e0-b152-ec6429c1646b=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2024-01-15T15:28:16.370+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Successfully synced group in generation Generation{generationId=1, memberId='consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2-dbebc620-f1dc-45e0-b152-ec6429c1646b', protocol='range'} policy-apex-pdp | [2024-01-15T15:28:16.371+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2024-01-15T15:28:16.374+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2024-01-15T15:28:16.382+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2024-01-15T15:28:16.393+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2, groupId=a66a1343-5c6d-4c55-860d-3d5ddcaf6219] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
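The join sequence above is the consumer group protocol working normally: the first JoinGroup is rejected with MemberIdRequiredException so the broker can assign a member id, the rejoin succeeds at generation 1, partition policy-pdp-pap-0 is assigned, and with no committed offset the position is reset to offset 1. To inspect the same group after the fact (run inside the kafka container):

  # Describe the consumer group from the log; the group id is the UUID that
  # prefixes the consumer client id.
  kafka-consumer-groups --bootstrap-server kafka:9092 \
    --describe --group a66a1343-5c6d-4c55-860d-3d5ddcaf6219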
policy-apex-pdp | [2024-01-15T15:28:33.011+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"94e19f88-e655-41ee-a7e4-15c2eb23ed16","timestampMs":1705332513011,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-01-15T15:28:33.028+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"94e19f88-e655-41ee-a7e4-15c2eb23ed16","timestampMs":1705332513011,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-01-15T15:28:33.031+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-01-15T15:28:33.214+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"3a4bff02-b148-4906-89e0-3452ec19425b","timestampMs":1705332513156,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-01-15T15:28:33.226+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2024-01-15T15:28:33.226+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"b9d06743-04f4-4bbd-9c7d-b4492be21465","timestampMs":1705332513226,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-01-15T15:28:33.227+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"3a4bff02-b148-4906-89e0-3452ec19425b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"8e34aa81-cc44-41ed-9fa1-c9758c7eb343","timestampMs":1705332513227,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-01-15T15:28:33.236+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"3a4bff02-b148-4906-89e0-3452ec19425b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"8e34aa81-cc44-41ed-9fa1-c9758c7eb343","timestampMs":1705332513227,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-01-15T15:28:33.236+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-01-15T15:28:33.241+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"b9d06743-04f4-4bbd-9c7d-b4492be21465","timestampMs":1705332513226,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-01-15T15:28:33.241+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-01-15T15:28:33.254+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9dc3ba67-47f7-4531-935c-8f19ab3aa9d2","timestampMs":1705332513157,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-01-15T15:28:33.257+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9dc3ba67-47f7-4531-935c-8f19ab3aa9d2","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"0b62d888-5f9f-45ef-a8c4-5186ef39c42f","timestampMs":1705332513257,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-01-15T15:28:33.266+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9dc3ba67-47f7-4531-935c-8f19ab3aa9d2","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"0b62d888-5f9f-45ef-a8c4-5186ef39c42f","timestampMs":1705332513257,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-01-15T15:28:33.266+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-01-15T15:28:33.346+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"186de137-5a34-429d-99f9-1d40097069e1","timestampMs":1705332513289,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-01-15T15:28:33.347+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"186de137-5a34-429d-99f9-1d40097069e1","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae00df6d-b992-4fdd-9c88-e5121e0a590c","timestampMs":1705332513347,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-01-15T15:28:33.356+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"186de137-5a34-429d-99f9-1d40097069e1","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae00df6d-b992-4fdd-9c88-e5121e0a590c","timestampMs":1705332513347,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-01-15T15:28:33.356+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS zookeeper_1 | [2024-01-15 15:27:21,333] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-01-15 15:27:21,333] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-01-15 15:27:21,333] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,353] INFO Logging initialized @641ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper_1 | [2024-01-15 15:27:21,467] WARN o.e.j.s.ServletContextHandler@49c90a9c{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-01-15 15:27:21,467] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-01-15 15:27:21,511] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-01-15 15:27:21,549] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper_1 | [2024-01-15 15:27:21,549] INFO No 
SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper_1 | [2024-01-15 15:27:21,550] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper_1 | [2024-01-15 15:27:21,554] WARN ServletContext@o.e.j.s.ServletContextHandler@49c90a9c{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper_1 | [2024-01-15 15:27:21,562] INFO Started o.e.j.s.ServletContextHandler@49c90a9c{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-01-15 15:27:21,574] INFO Started ServerConnector@723ca036{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper_1 | [2024-01-15 15:27:21,574] INFO Started @863ms (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-01-15 15:27:21,574] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper_1 | [2024-01-15 15:27:21,578] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-01-15 15:27:21,578] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-01-15 15:27:21,580] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-01-15 15:27:21,581] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-01-15 15:27:21,601] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-01-15 15:27:21,601] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-01-15 15:27:21,603] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-01-15 15:27:21,603] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-01-15 15:27:21,654] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper_1 | [2024-01-15 15:27:21,654] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-15 15:27:21,663] INFO Snapshot loaded in 60 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-01-15 15:27:21,664] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-15 15:27:21,664] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-15 15:27:21,676] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper_1 | [2024-01-15 15:27:21,677] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper_1 | [2024-01-15 15:27:21,697] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper_1 | [2024-01-15 15:27:21,701] INFO ZooKeeper audit is disabled. 
zookeeper_1 | [2024-01-15 15:27:23,974] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
policy-api | Waiting for mariadb port 3306...
policy-api | mariadb (172.17.0.3:3306) open
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |  :: Spring Boot ::                (v3.1.4)
policy-api |
policy-api | [2024-01-15T15:27:39.867+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2024-01-15T15:27:39.868+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-api | [2024-01-15T15:27:45.329+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2024-01-15T15:27:45.711+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 331 ms. Found 6 JPA repository interfaces.
policy-api | [2024-01-15T15:27:46.907+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
policy-api | [2024-01-15T15:27:46.908+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
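The repeated LocalVariableTableParameterNameDiscoverer warning means the named classes were compiled without javac -parameters, so Spring falls back to reading parameter names from debug info. A small illustration of what the flag changes, using plain reflection (standalone example, not ONAP code):

    import java.lang.reflect.Method;
    import java.lang.reflect.Parameter;

    // Parameter names are only visible to reflection when the class was
    // compiled with 'javac -parameters'; otherwise they appear as arg0, arg1, ...
    public class ParamNameCheck {
        public void handle(String policyName) { }

        public static void main(String[] args) throws Exception {
            Method m = ParamNameCheck.class.getMethod("handle", String.class);
            Parameter p = m.getParameters()[0];
            // Prints "policyName" if compiled with -parameters, "arg0" otherwise.
            System.out.println(p.isNamePresent() ? p.getName() : p.getName() + " (names not compiled in)");
        }
    }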
policy-api | [2024-01-15T15:27:48.056+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-api | [2024-01-15T15:27:48.066+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2024-01-15T15:27:48.069+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2024-01-15T15:27:48.069+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16]
policy-api | [2024-01-15T15:27:48.166+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2024-01-15T15:27:48.167+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 8088 ms
policy-api | [2024-01-15T15:27:48.649+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2024-01-15T15:27:48.725+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
policy-api | [2024-01-15T15:27:48.729+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
policy-api | [2024-01-15T15:27:48.778+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
kafka | [2024-01-15 15:27:24,759] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2024-01-15 15:27:25,140] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-01-15 15:27:25,209] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2024-01-15 15:27:25,210] INFO starting (kafka.server.KafkaServer)
kafka | [2024-01-15 15:27:25,210] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2024-01-15 15:27:25,223] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:host.name=91df149ffd2d (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
policy-clamp-ac-http-ppnt | Waiting for kafka port 9092...
policy-clamp-ac-http-ppnt | kafka (172.17.0.5:9092) open
policy-clamp-ac-http-ppnt | Policy clamp HTTP participant config file: /opt/app/policy/clamp/etc/HttpParticipantParameters.yaml
policy-clamp-ac-http-ppnt |
policy-clamp-ac-http-ppnt |   .   ____          _            __ _ _
policy-clamp-ac-http-ppnt |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-clamp-ac-http-ppnt | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-clamp-ac-http-ppnt |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-clamp-ac-http-ppnt |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-clamp-ac-http-ppnt |  =========|_|==============|___/=/_/_/_/
policy-clamp-ac-http-ppnt |  :: Spring Boot ::                (v3.1.4)
policy-clamp-ac-http-ppnt |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:29.721+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:29.855+00:00|INFO|Application|main] Starting Application using Java 17.0.9 with PID 12 (/app/app.jar started by policy in /opt/app/policy/clamp/bin)
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:29.856+00:00|INFO|Application|main] No active profile set, falling back to 1 default profile: "default"
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:35.935+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:35.950+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:35.952+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:35.952+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16]
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:36.254+00:00|INFO|[/onap/policy/clamp/acm/httpparticipant]|main] Initializing Spring embedded WebApplicationContext
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:36.254+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 6133 ms
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:38.320+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-clamp-ac-http-ppnt | 	allow.auto.create.topics = true
policy-clamp-ac-http-ppnt | 	auto.commit.interval.ms = 5000
policy-clamp-ac-http-ppnt | 	auto.include.jmx.reporter = true
policy-clamp-ac-http-ppnt | 	auto.offset.reset = latest
policy-clamp-ac-http-ppnt | 	bootstrap.servers = [kafka:9092]
policy-clamp-ac-http-ppnt | 	check.crcs = true
policy-clamp-ac-http-ppnt | 	client.dns.lookup = use_all_dns_ips
policy-clamp-ac-http-ppnt | 	client.id = consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-1
policy-clamp-ac-http-ppnt | 	client.rack =
policy-clamp-ac-http-ppnt | 	connections.max.idle.ms = 540000
policy-clamp-ac-http-ppnt | 	default.api.timeout.ms = 60000
policy-clamp-ac-http-ppnt | 	enable.auto.commit = true
policy-clamp-ac-http-ppnt | 	exclude.internal.topics = true
policy-clamp-ac-http-ppnt | 	fetch.max.bytes = 52428800
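The ConsumerConfig block above (continuing below) is the Kafka client echoing its effective settings at construction time. For reference, a sketch of a consumer built with the same key values; the group id is the UUID generated for this particular run, shown purely for illustration:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Sketch only: mirrors the logged participant consumer settings
    // (PLAINTEXT transport, string deserializers, auto-commit, latest offsets).
    public class ParticipantConsumerSketch {
        public static KafkaConsumer<String, String> build() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "75c1079f-283a-4d21-9ddf-3c97158a5ec8");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            return new KafkaConsumer<>(props);
        }
    }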
policy-clamp-ac-http-ppnt | 	fetch.max.wait.ms = 500
policy-clamp-ac-http-ppnt | 	fetch.min.bytes = 1
policy-clamp-ac-http-ppnt | 	group.id = 75c1079f-283a-4d21-9ddf-3c97158a5ec8
policy-clamp-ac-http-ppnt | 	group.instance.id = null
policy-clamp-ac-http-ppnt | 	heartbeat.interval.ms = 3000
policy-clamp-ac-http-ppnt | 	interceptor.classes = []
policy-clamp-ac-http-ppnt | 	internal.leave.group.on.close = true
policy-clamp-ac-http-ppnt | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-clamp-ac-http-ppnt | 	isolation.level = read_uncommitted
policy-clamp-ac-http-ppnt | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-ac-http-ppnt | 	max.partition.fetch.bytes = 1048576
policy-clamp-ac-http-ppnt | 	max.poll.interval.ms = 300000
policy-clamp-ac-http-ppnt | 	max.poll.records = 500
policy-clamp-ac-http-ppnt | 	metadata.max.age.ms = 300000
policy-clamp-ac-http-ppnt | 	metric.reporters = []
policy-api | [2024-01-15T15:27:49.133+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2024-01-15T15:27:49.157+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2024-01-15T15:27:49.267+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7636823f
policy-api | [2024-01-15T15:27:49.270+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2024-01-15T15:27:49.303+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
policy-api | [2024-01-15T15:27:49.306+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
policy-api | [2024-01-15T15:27:51.592+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2024-01-15T15:27:51.596+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2024-01-15T15:27:52.952+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2024-01-15T15:27:53.817+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2024-01-15T15:27:55.019+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
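The spring.jpa.open-in-view warning above is emitted because the property is left at Spring Boot's default (true). A sketch of setting it explicitly at startup; this assumes the PolicyApiApplication class named in the log is importable, and is not the project's actual configuration approach:

    import java.util.Map;
    import org.springframework.boot.SpringApplication;

    // Sketch: silence the open-in-view warning by making the choice explicit
    // before the context starts (any @SpringBootApplication class works).
    public class Launcher {
        public static void main(String[] args) {
            SpringApplication app = new SpringApplication(PolicyApiApplication.class);
            app.setDefaultProperties(Map.of("spring.jpa.open-in-view", "false"));
            app.run(args);
        }
    }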
policy-api | [2024-01-15T15:27:55.236+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@fb74661, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4b7feb38, org.springframework.security.web.context.SecurityContextHolderFilter@25e7e6d, org.springframework.security.web.header.HeaderWriterFilter@66161fee, org.springframework.security.web.authentication.logout.LogoutFilter@607c7f58, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@729f8c5d, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@53d257e7, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@56b5de49, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6e04275e, org.springframework.security.web.access.ExceptionTranslationFilter@56bc8c45, org.springframework.security.web.access.intercept.AuthorizationFilter@31829b82]
policy-api | [2024-01-15T15:27:56.224+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-api | [2024-01-15T15:27:56.282+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2024-01-15T15:27:56.302+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
policy-api | [2024-01-15T15:27:56.324+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 18.401 seconds (process running for 20.175)
policy-api | [2024-01-15T15:28:41.904+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2024-01-15T15:28:41.904+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet'
policy-api | [2024-01-15T15:28:41.906+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 2 ms
policy-clamp-ac-http-ppnt | 	metrics.num.samples = 2
policy-clamp-ac-http-ppnt | 	metrics.recording.level = INFO
policy-clamp-ac-http-ppnt | 	metrics.sample.window.ms = 30000
policy-clamp-ac-http-ppnt | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-clamp-ac-http-ppnt | 	receive.buffer.bytes = 65536
policy-clamp-ac-http-ppnt | 	reconnect.backoff.max.ms = 1000
policy-clamp-ac-http-ppnt | 	reconnect.backoff.ms = 50
policy-clamp-ac-http-ppnt | 	request.timeout.ms = 30000
policy-clamp-ac-http-ppnt | 	retry.backoff.ms = 100
policy-clamp-ac-http-ppnt | 	sasl.client.callback.handler.class = null
policy-clamp-ac-http-ppnt | 	sasl.jaas.config = null
policy-clamp-ac-http-ppnt | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-clamp-ac-http-ppnt | 	sasl.kerberos.min.time.before.relogin = 60000
policy-clamp-ac-http-ppnt | 	sasl.kerberos.service.name = null
policy-clamp-ac-http-ppnt | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-clamp-ac-http-ppnt | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-clamp-ac-http-ppnt | 	sasl.login.callback.handler.class = null
policy-clamp-ac-http-ppnt | 	sasl.login.class = null
policy-clamp-ac-http-ppnt | 	sasl.login.connect.timeout.ms = null
policy-clamp-ac-http-ppnt | 	sasl.login.read.timeout.ms = null
policy-clamp-ac-http-ppnt | 	sasl.login.refresh.buffer.seconds = 300
policy-clamp-ac-http-ppnt | 	sasl.login.refresh.min.period.seconds = 60
policy-clamp-ac-http-ppnt | 	sasl.login.refresh.window.factor = 0.8
policy-clamp-ac-http-ppnt | 	sasl.login.refresh.window.jitter = 0.05
policy-clamp-ac-http-ppnt | 	sasl.login.retry.backoff.max.ms = 10000
policy-clamp-ac-http-ppnt | 	sasl.login.retry.backoff.ms = 100
policy-clamp-ac-http-ppnt | 	sasl.mechanism = GSSAPI
policy-clamp-ac-http-ppnt | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-clamp-ac-http-ppnt | 	sasl.oauthbearer.expected.audience = null
policy-clamp-ac-http-ppnt | 	sasl.oauthbearer.expected.issuer = null
policy-clamp-ac-http-ppnt | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-clamp-ac-http-ppnt | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-clamp-ac-http-ppnt | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-clamp-ac-http-ppnt | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-clamp-ac-http-ppnt | 	sasl.oauthbearer.scope.claim.name = scope
policy-clamp-ac-http-ppnt | 	sasl.oauthbearer.sub.claim.name = sub
policy-clamp-ac-http-ppnt | 	sasl.oauthbearer.token.endpoint.url = null
policy-clamp-ac-http-ppnt | 	security.protocol = PLAINTEXT
policy-clamp-ac-http-ppnt | 	security.providers = null
policy-clamp-ac-http-ppnt | 	send.buffer.bytes = 131072
policy-clamp-ac-http-ppnt | 	session.timeout.ms = 45000
policy-clamp-ac-http-ppnt | 	socket.connection.setup.timeout.max.ms = 30000
policy-clamp-ac-http-ppnt | 	socket.connection.setup.timeout.ms = 10000
policy-clamp-ac-http-ppnt | 	ssl.cipher.suites = null
policy-clamp-ac-http-ppnt | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-clamp-ac-http-ppnt | 	ssl.endpoint.identification.algorithm = https
policy-clamp-ac-http-ppnt | 	ssl.engine.factory.class = null
policy-clamp-ac-http-ppnt | 	ssl.key.password = null
policy-clamp-ac-http-ppnt | 	ssl.keymanager.algorithm = SunX509
policy-clamp-ac-http-ppnt | 	ssl.keystore.certificate.chain = null
policy-clamp-ac-http-ppnt | 	ssl.keystore.key = null
policy-clamp-ac-http-ppnt | 	ssl.keystore.location = null
policy-clamp-ac-http-ppnt | 	ssl.keystore.password = null
policy-clamp-ac-http-ppnt | 	ssl.keystore.type = JKS
policy-clamp-ac-http-ppnt | 	ssl.protocol = TLSv1.3
policy-clamp-ac-http-ppnt | 	ssl.provider = null
policy-clamp-ac-http-ppnt | 	ssl.secure.random.implementation = null
policy-clamp-ac-http-ppnt | 	ssl.trustmanager.algorithm = PKIX
policy-clamp-ac-http-ppnt | 	ssl.truststore.certificates = null
policy-clamp-ac-http-ppnt | 	ssl.truststore.location = null
policy-clamp-ac-http-ppnt | 	ssl.truststore.password = null
policy-clamp-ac-http-ppnt | 	ssl.truststore.type = JKS
policy-clamp-ac-http-ppnt | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-ac-http-ppnt |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:38.652+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:38.652+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:38.652+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332458650
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:38.655+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-1, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Subscribed to topic(s): policy-acruntime-participant
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:38.677+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.acm.participant.http.config.MicrometerConfig
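After building the consumer, the participant subscribes to policy-acruntime-participant, as logged just above; the SingleThreadedBusTopicSource lines further down show a 15 s fetch timeout (fetchTimeout=15000). The underlying client pattern, sketched (the real participant wraps this in its own topic-source threading; this is just the plain Kafka API):

    import java.time.Duration;
    import java.util.List;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Sketch of the subscribe/poll cycle behind the "Subscribed to topic(s):
    // policy-acruntime-participant" log line above.
    public class AcRuntimePollLoop {
        public static void run(KafkaConsumer<String, String> consumer) {
            consumer.subscribe(List.of("policy-acruntime-participant"));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(15))) {
                    System.out.printf("offset=%d value=%s%n", rec.offset(), rec.value());
                }
            }
        }
    }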
policy-clamp-ac-pf-ppnt | kafka (172.17.0.5:9092) open
policy-clamp-ac-pf-ppnt | Waiting for kafka port 9092...
policy-clamp-ac-pf-ppnt | Waiting for api port 6969...
policy-clamp-ac-pf-ppnt | api (172.17.0.10:6969) open
policy-clamp-ac-pf-ppnt | Policy clamp policy participant config file: /opt/app/policy/clamp/etc/PolicyParticipantParameters.yaml
policy-clamp-ac-pf-ppnt |
policy-clamp-ac-pf-ppnt |   .   ____          _            __ _ _
policy-clamp-ac-pf-ppnt |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-clamp-ac-pf-ppnt | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-clamp-ac-pf-ppnt |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-clamp-ac-pf-ppnt |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-clamp-ac-pf-ppnt |  =========|_|==============|___/=/_/_/_/
policy-clamp-ac-pf-ppnt |  :: Spring Boot ::                (v3.1.4)
policy-clamp-ac-pf-ppnt |
policy-clamp-ac-pf-ppnt | [2024-01-15T15:27:58.270+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
policy-clamp-ac-pf-ppnt | [2024-01-15T15:27:58.405+00:00|INFO|PolicyParticipantApplication|main] Starting PolicyParticipantApplication using Java 17.0.9 with PID 39 (/app/app.jar started by policy in /opt/app/policy/clamp/bin)
policy-clamp-ac-pf-ppnt | [2024-01-15T15:27:58.406+00:00|INFO|PolicyParticipantApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:02.071+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:02.083+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:02.085+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:02.086+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16]
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:02.214+00:00|INFO|[/onap/policy/clamp/acm/policyparticipant]|main] Initializing Spring embedded WebApplicationContext
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:02.215+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3619 ms
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:03.226+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-clamp-ac-pf-ppnt | 	allow.auto.create.topics = true
policy-clamp-ac-pf-ppnt | 	auto.commit.interval.ms = 5000
policy-clamp-ac-pf-ppnt | 	auto.include.jmx.reporter = true
policy-clamp-ac-pf-ppnt | 	auto.offset.reset = latest
policy-clamp-ac-pf-ppnt | 	bootstrap.servers = [kafka:9092]
policy-clamp-ac-pf-ppnt | 	check.crcs = true
policy-clamp-ac-pf-ppnt | 	client.dns.lookup = use_all_dns_ips
policy-clamp-ac-pf-ppnt | 	client.id = consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-1
policy-clamp-ac-pf-ppnt | 	client.rack =
policy-clamp-ac-pf-ppnt | 	connections.max.idle.ms = 540000
policy-clamp-ac-pf-ppnt | 	default.api.timeout.ms = 60000
policy-clamp-ac-pf-ppnt | 	enable.auto.commit = true
policy-clamp-ac-pf-ppnt | 	exclude.internal.topics = true
policy-clamp-ac-pf-ppnt | 	fetch.max.bytes = 52428800
policy-clamp-ac-pf-ppnt | 	fetch.max.wait.ms = 500
policy-clamp-ac-pf-ppnt | 	fetch.min.bytes = 1
policy-clamp-ac-pf-ppnt | 	group.id = cbd2f33b-08da-4df5-9be7-9783ed68c1a9
policy-clamp-ac-pf-ppnt | 	group.instance.id = null
policy-clamp-ac-pf-ppnt | 	heartbeat.interval.ms = 3000
policy-clamp-ac-pf-ppnt | 	interceptor.classes = []
policy-clamp-ac-pf-ppnt | 	internal.leave.group.on.close = true
policy-clamp-ac-pf-ppnt | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-clamp-ac-pf-ppnt | 	isolation.level = read_uncommitted
policy-clamp-ac-pf-ppnt | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-ac-pf-ppnt | 	max.partition.fetch.bytes = 1048576
policy-clamp-ac-pf-ppnt | 	max.poll.interval.ms = 300000
policy-clamp-ac-pf-ppnt | 	max.poll.records = 500
policy-clamp-ac-pf-ppnt | 	metadata.max.age.ms = 300000
policy-clamp-ac-pf-ppnt | 	metric.reporters = []
policy-clamp-ac-pf-ppnt | 	metrics.num.samples = 2
policy-clamp-ac-pf-ppnt | 	metrics.recording.level = INFO
policy-clamp-ac-pf-ppnt | 	metrics.sample.window.ms = 30000
policy-clamp-ac-pf-ppnt | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-clamp-ac-pf-ppnt | 	receive.buffer.bytes = 65536
policy-clamp-ac-pf-ppnt | 	reconnect.backoff.max.ms = 1000
policy-clamp-ac-pf-ppnt | 	reconnect.backoff.ms = 50
policy-clamp-ac-pf-ppnt | 	request.timeout.ms = 30000
mariadb | 2024-01-15 15:27:28 0 [Note] Server socket created on IP: '0.0.0.0'.
mariadb | 2024-01-15 15:27:28 0 [Note] Server socket created on IP: '::'.
mariadb | 2024-01-15 15:27:28 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
mariadb | 2024-01-15 15:27:28 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
mariadb | 2024-01-15 15:27:28 0 [Note] InnoDB: Buffer pool(s) load completed at 240115 15:27:28
mariadb | 2024-01-15 15:27:28 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
mariadb | 2024-01-15 15:27:28 11 [Warning] Aborted connection 11 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.12' (This connection closed normally without authentication)
mariadb | 2024-01-15 15:27:29 62 [Warning] Aborted connection 62 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.13' (This connection closed normally without authentication)
mariadb | 2024-01-15 15:27:30 108 [Warning] Aborted connection 108 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.14' (This connection closed normally without authentication)
policy-clamp-ac-pf-ppnt | 	retry.backoff.ms = 100
policy-clamp-ac-pf-ppnt | 	sasl.client.callback.handler.class = null
policy-clamp-ac-pf-ppnt | 	sasl.jaas.config = null
policy-clamp-ac-pf-ppnt | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-clamp-ac-pf-ppnt | 	sasl.kerberos.min.time.before.relogin = 60000
policy-clamp-ac-pf-ppnt | 	sasl.kerberos.service.name = null
policy-clamp-ac-pf-ppnt | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-clamp-ac-pf-ppnt | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-clamp-ac-pf-ppnt | 	sasl.login.callback.handler.class = null
policy-clamp-ac-pf-ppnt | 	sasl.login.class = null
policy-clamp-ac-pf-ppnt | 	sasl.login.connect.timeout.ms = null
policy-clamp-ac-pf-ppnt | 	sasl.login.read.timeout.ms = null
policy-clamp-ac-pf-ppnt | 	sasl.login.refresh.buffer.seconds = 300
policy-clamp-ac-pf-ppnt | 	sasl.login.refresh.min.period.seconds = 60
policy-clamp-ac-pf-ppnt | 	sasl.login.refresh.window.factor = 0.8
policy-clamp-ac-pf-ppnt | 	sasl.login.refresh.window.jitter = 0.05
policy-clamp-ac-pf-ppnt | 	sasl.login.retry.backoff.max.ms = 10000
policy-clamp-ac-pf-ppnt | 	sasl.login.retry.backoff.ms = 100
policy-clamp-ac-pf-ppnt | 	sasl.mechanism = GSSAPI
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.expected.audience = null
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.expected.issuer = null
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.scope.claim.name = scope
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.sub.claim.name = sub
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.token.endpoint.url = null
policy-clamp-ac-pf-ppnt | 	security.protocol = PLAINTEXT
policy-clamp-ac-pf-ppnt | 	security.providers = null
policy-clamp-ac-pf-ppnt | 	send.buffer.bytes = 131072
policy-clamp-ac-pf-ppnt | 	session.timeout.ms = 45000
policy-clamp-ac-pf-ppnt | 	socket.connection.setup.timeout.max.ms = 30000
policy-clamp-ac-pf-ppnt | 	socket.connection.setup.timeout.ms = 10000
policy-clamp-ac-pf-ppnt | 	ssl.cipher.suites = null
policy-clamp-ac-pf-ppnt | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-clamp-ac-pf-ppnt | 	ssl.endpoint.identification.algorithm = https
policy-clamp-ac-pf-ppnt | 	ssl.engine.factory.class = null
policy-clamp-ac-pf-ppnt | 	ssl.key.password = null
policy-clamp-ac-pf-ppnt | 	ssl.keymanager.algorithm = SunX509
policy-clamp-ac-pf-ppnt | 	ssl.keystore.certificate.chain = null
policy-clamp-ac-pf-ppnt | 	ssl.keystore.key = null
policy-clamp-ac-pf-ppnt | 	ssl.keystore.location = null
policy-clamp-ac-pf-ppnt | 	ssl.keystore.password = null
policy-clamp-ac-pf-ppnt | 	ssl.keystore.type = JKS
policy-clamp-ac-pf-ppnt | 	ssl.protocol = TLSv1.3
policy-clamp-ac-pf-ppnt | 	ssl.provider = null
policy-clamp-ac-pf-ppnt | 	ssl.secure.random.implementation = null
policy-clamp-ac-pf-ppnt | 	ssl.trustmanager.algorithm = PKIX
policy-clamp-ac-pf-ppnt | 	ssl.truststore.certificates = null
policy-clamp-ac-pf-ppnt | 	ssl.truststore.location = null
policy-clamp-ac-pf-ppnt | 	ssl.truststore.password = null
policy-clamp-ac-pf-ppnt | 	ssl.truststore.type = JKS
policy-clamp-ac-pf-ppnt | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-ac-pf-ppnt |
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:03.521+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:03.522+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:03.522+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332483520
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:03.524+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-1, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Subscribed to topic(s): policy-acruntime-participant
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:03.532+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.acm.participant.policy.config.MicrometerConfig
kafka | [2024-01-15 15:27:25,227] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,227] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,229] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68be8808 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-15 15:27:25,233] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-01-15 15:27:25,238] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-15 15:27:25,239] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-01-15 15:27:25,242] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-15 15:27:25,247] INFO Socket connection established, initiating session, client: /172.17.0.5:59522, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-15 15:27:25,255] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003f89f0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-15 15:27:25,259] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-01-15 15:27:25,737] INFO Cluster ID = f-_bmQWWQMKgLbbohjyq1w (kafka.server.KafkaServer)
kafka | [2024-01-15 15:27:25,741] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2024-01-15 15:27:25,799] INFO KafkaConfig values:
kafka | 	advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | 	alter.config.policy.class.name = null
kafka | 	alter.log.dirs.replication.quota.window.num = 11
kafka | 	alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | 	authorizer.class.name =
kafka | 	auto.create.topics.enable = true
kafka | 	auto.include.jmx.reporter = true
kafka | 	auto.leader.rebalance.enable = true
kafka | 	background.threads = 10
kafka | 	broker.heartbeat.interval.ms = 2000
kafka | 	broker.id = 1
kafka | 	broker.id.generation.enable = true
kafka | 	broker.rack = null
kafka | 	broker.session.timeout.ms = 9000
kafka | 	client.quota.callback.class = null
kafka | 	compression.type = producer
kafka | 	connection.failed.authentication.delay.ms = 100
kafka | 	connections.max.idle.ms = 600000
kafka | 	connections.max.reauth.ms = 0
kafka | 	control.plane.listener.name = null
kafka | 	controlled.shutdown.enable = true
kafka | 	controlled.shutdown.max.retries = 3
kafka | 	controlled.shutdown.retry.backoff.ms = 5000
kafka | 	controller.listener.names = null
kafka | 	controller.quorum.append.linger.ms = 25
kafka | 	controller.quorum.election.backoff.max.ms = 1000
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:04.091+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@72f8ae0c, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@323f3c96, org.springframework.security.web.context.SecurityContextHolderFilter@336206d8, org.springframework.security.web.header.HeaderWriterFilter@14ef2482, org.springframework.security.web.authentication.logout.LogoutFilter@216e0771, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@472a11ae, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1f11f64e, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@858d8b4, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6726cc69, org.springframework.security.web.access.ExceptionTranslationFilter@42ea287, org.springframework.security.web.access.intercept.AuthorizationFilter@14982a82]
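The "Will secure any request with [...]" line lists the servlet filters in the Spring Security chain the participant builds: basic auth, anonymous authentication, and a final authorization filter. A minimal configuration that yields a chain of this shape, sketched with Spring Security 6 APIs; this is not the participant's actual security class:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.config.annotation.web.builders.HttpSecurity;
    import org.springframework.security.web.SecurityFilterChain;

    // Sketch: secure every request with HTTP Basic auth, which produces the
    // BasicAuthenticationFilter, AnonymousAuthenticationFilter and
    // AuthorizationFilter enumerated in the log line above.
    @Configuration
    public class SecuritySketch {
        @Bean
        SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
            http.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
                .httpBasic(basic -> { });
            return http.build();
        }
    }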
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.390+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '/actuator'
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.462+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.576+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/onap/policy/clamp/acm/policyparticipant'
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.600+00:00|INFO|ServiceManager|main] service manager starting
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.601+00:00|INFO|ServiceManager|main] service manager starting Topic endpoint management
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.618+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=cbd2f33b-08da-4df5-9be7-9783ed68c1a9, consumerInstance=policy-clamp-ac-pf-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.643+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-clamp-ac-pf-ppnt | 	allow.auto.create.topics = true
policy-clamp-ac-pf-ppnt | 	auto.commit.interval.ms = 5000
policy-clamp-ac-pf-ppnt | 	auto.include.jmx.reporter = true
policy-clamp-ac-pf-ppnt | 	auto.offset.reset = latest
policy-clamp-ac-pf-ppnt | 	bootstrap.servers = [kafka:9092]
policy-clamp-ac-pf-ppnt | 	check.crcs = true
policy-clamp-ac-pf-ppnt | 	client.dns.lookup = use_all_dns_ips
policy-clamp-ac-pf-ppnt | 	client.id = consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2
policy-clamp-ac-pf-ppnt | 	client.rack =
policy-clamp-ac-pf-ppnt | 	connections.max.idle.ms = 540000
policy-clamp-ac-pf-ppnt | 	default.api.timeout.ms = 60000
policy-clamp-ac-pf-ppnt | 	enable.auto.commit = true
policy-clamp-ac-pf-ppnt | 	exclude.internal.topics = true
policy-clamp-ac-pf-ppnt | 	fetch.max.bytes = 52428800
policy-clamp-ac-pf-ppnt | 	fetch.max.wait.ms = 500
policy-clamp-ac-pf-ppnt | 	fetch.min.bytes = 1
policy-clamp-ac-pf-ppnt | 	group.id = cbd2f33b-08da-4df5-9be7-9783ed68c1a9
policy-clamp-ac-pf-ppnt | 	group.instance.id = null
policy-clamp-ac-pf-ppnt | 	heartbeat.interval.ms = 3000
policy-clamp-ac-pf-ppnt | 	interceptor.classes = []
policy-clamp-ac-pf-ppnt | 	internal.leave.group.on.close = true
policy-clamp-ac-pf-ppnt | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-clamp-ac-pf-ppnt | 	isolation.level = read_uncommitted
policy-clamp-ac-pf-ppnt | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-ac-pf-ppnt | 	max.partition.fetch.bytes = 1048576
policy-clamp-ac-pf-ppnt | 	max.poll.interval.ms = 300000
policy-clamp-ac-pf-ppnt | 	max.poll.records = 500
policy-clamp-ac-pf-ppnt | 	metadata.max.age.ms = 300000
policy-clamp-ac-pf-ppnt | 	metric.reporters = []
policy-clamp-ac-pf-ppnt | 	metrics.num.samples = 2
policy-clamp-ac-pf-ppnt | 	metrics.recording.level = INFO
policy-clamp-ac-pf-ppnt | 	metrics.sample.window.ms = 30000
policy-clamp-ac-pf-ppnt | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-clamp-ac-pf-ppnt | 	receive.buffer.bytes = 65536
policy-clamp-ac-pf-ppnt | 	reconnect.backoff.max.ms = 1000
policy-clamp-ac-pf-ppnt | 	reconnect.backoff.ms = 50
policy-clamp-ac-pf-ppnt | 	request.timeout.ms = 30000
policy-clamp-ac-pf-ppnt | 	retry.backoff.ms = 100
policy-clamp-ac-pf-ppnt | 	sasl.client.callback.handler.class = null
policy-clamp-ac-pf-ppnt | 	sasl.jaas.config = null
policy-clamp-ac-pf-ppnt | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-clamp-ac-pf-ppnt | 	sasl.kerberos.min.time.before.relogin = 60000
policy-clamp-ac-pf-ppnt | 	sasl.kerberos.service.name = null
policy-clamp-ac-pf-ppnt | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-clamp-ac-pf-ppnt | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-clamp-ac-pf-ppnt | 	sasl.login.callback.handler.class = null
policy-clamp-ac-pf-ppnt | 	sasl.login.class = null
policy-clamp-ac-pf-ppnt | 	sasl.login.connect.timeout.ms = null
policy-clamp-ac-pf-ppnt | 	sasl.login.read.timeout.ms = null
policy-clamp-ac-pf-ppnt | 	sasl.login.refresh.buffer.seconds = 300
policy-clamp-ac-pf-ppnt | 	sasl.login.refresh.min.period.seconds = 60
policy-clamp-ac-pf-ppnt | 	sasl.login.refresh.window.factor = 0.8
policy-clamp-ac-pf-ppnt | 	sasl.login.refresh.window.jitter = 0.05
policy-clamp-ac-pf-ppnt | 	sasl.login.retry.backoff.max.ms = 10000
policy-clamp-ac-pf-ppnt | 	sasl.login.retry.backoff.ms = 100
policy-clamp-ac-pf-ppnt | 	sasl.mechanism = GSSAPI
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.expected.audience = null
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.expected.issuer = null
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.jwks.endpoint.url = null
kafka | 	controller.quorum.election.timeout.ms = 1000
kafka | 	controller.quorum.fetch.timeout.ms = 2000
kafka | 	controller.quorum.request.timeout.ms = 2000
kafka | 	controller.quorum.retry.backoff.ms = 20
kafka | 	controller.quorum.voters = []
kafka | 	controller.quota.window.num = 11
kafka | 	controller.quota.window.size.seconds = 1
kafka | 	controller.socket.timeout.ms = 30000
kafka | 	create.topic.policy.class.name = null
kafka | 	default.replication.factor = 1
kafka | 	delegation.token.expiry.check.interval.ms = 3600000
kafka | 	delegation.token.expiry.time.ms = 86400000
kafka | 	delegation.token.master.key = null
kafka | 	delegation.token.max.lifetime.ms = 604800000
kafka | 	delegation.token.secret.key = null
kafka | 	delete.records.purgatory.purge.interval.requests = 1
kafka | 	delete.topic.enable = true
kafka | 	early.start.listeners = null
kafka | 	fetch.max.bytes = 57671680
kafka | 	fetch.purgatory.purge.interval.requests = 1000
kafka | 	group.consumer.assignors = []
kafka | 	group.consumer.heartbeat.interval.ms = 5000
kafka | 	group.consumer.max.heartbeat.interval.ms = 15000
kafka | 	group.consumer.max.session.timeout.ms = 60000
kafka | 	group.consumer.max.size = 2147483647
kafka | 	group.consumer.min.heartbeat.interval.ms = 5000
kafka | 	group.consumer.min.session.timeout.ms = 45000
kafka | 	group.consumer.session.timeout.ms = 45000
kafka | 	group.coordinator.new.enable = false
kafka | 	group.coordinator.threads = 1
kafka | 	group.initial.rebalance.delay.ms = 3000
kafka | 	group.max.session.timeout.ms = 1800000
kafka | 	group.max.size = 2147483647
kafka | 	group.min.session.timeout.ms = 6000
kafka | 	initial.broker.registration.timeout.ms = 60000
kafka | 	inter.broker.listener.name = PLAINTEXT
kafka | 	inter.broker.protocol.version = 3.5-IV2
kafka | 	kafka.metrics.polling.interval.secs = 10
kafka | 	kafka.metrics.reporters = []
kafka | 	leader.imbalance.check.interval.seconds = 300
kafka | 	leader.imbalance.per.broker.percentage = 10
kafka | 	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | 	listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | 	log.cleaner.backoff.ms = 15000
kafka | 	log.cleaner.dedupe.buffer.size = 134217728
kafka | 	log.cleaner.delete.retention.ms = 86400000
kafka | 	log.cleaner.enable = true
kafka | 	log.cleaner.io.buffer.load.factor = 0.9
kafka | 	log.cleaner.io.buffer.size = 524288
kafka | 	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | 	log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | 	log.cleaner.min.cleanable.ratio = 0.5
kafka | 	log.cleaner.min.compaction.lag.ms = 0
kafka | 	log.cleaner.threads = 1
kafka | 	log.cleanup.policy = [delete]
kafka | 	log.dir = /tmp/kafka-logs
kafka | 	log.dirs = /var/lib/kafka/data
kafka | 	log.flush.interval.messages = 9223372036854775807
kafka | 	log.flush.interval.ms = null
kafka | 	log.flush.offset.checkpoint.interval.ms = 60000
kafka | 	log.flush.scheduler.interval.ms = 9223372036854775807
kafka | 	log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | 	log.index.interval.bytes = 4096
kafka | 	log.index.size.max.bytes = 10485760
kafka | 	log.message.downconversion.enable = true
kafka | 	log.message.format.version = 3.0-IV1
kafka | 	log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | 	log.message.timestamp.type = CreateTime
kafka | 	log.preallocate = false
kafka | 	log.retention.bytes = -1
kafka | 	log.retention.check.interval.ms = 300000
kafka | 	log.retention.hours = 168
kafka | 	log.retention.minutes = null
kafka | 	log.retention.ms = null
kafka | 	log.roll.hours = 168
kafka | 	log.roll.jitter.hours = 0
kafka | 	log.roll.jitter.ms = null
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:39.816+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@10fda3d0, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2123064f, org.springframework.security.web.context.SecurityContextHolderFilter@11a8042c, org.springframework.security.web.header.HeaderWriterFilter@4f668f29, org.springframework.security.web.authentication.logout.LogoutFilter@17c2d509, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@38792286, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@69391e08, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@64b3b1ce, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@4f6b687e, org.springframework.security.web.access.ExceptionTranslationFilter@636bbbbb, org.springframework.security.web.access.intercept.AuthorizationFilter@6a0094c9]
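The KafkaConfig block above is the broker logging its effective configuration at startup (broker.id = 1, dual PLAINTEXT listeners). The same values can be read back at runtime through the Admin API; a sketch, not part of the CSIT itself:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    // Sketch: dump a broker's effective config (the values printed in the
    // KafkaConfig block above) via the Admin API; broker id "1" as logged.
    public class BrokerConfigDump {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
                Config cfg = admin.describeConfigs(List.of(broker)).all().get().get(broker);
                cfg.entries().forEach(e -> System.out.println(e.name() + " = " + e.value()));
            }
        }
    }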
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.scope.claim.name = scope
kafka | 	log.roll.ms = null
policy-clamp-ac-k8s-ppnt | Waiting for kafka port 9092...
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:43.612+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '/actuator'
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.sub.claim.name = sub
policy-clamp-runtime-acm | Waiting for mariadb port 3306...
kafka | 	log.segment.bytes = 1073741824
policy-clamp-ac-k8s-ppnt | kafka (172.17.0.5:9092) open
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:43.791+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-clamp-ac-sim-ppnt | Waiting for kafka port 9092...
policy-db-migrator | Waiting for mariadb port 3306...
policy-clamp-ac-pf-ppnt | 	sasl.oauthbearer.token.endpoint.url = null
policy-clamp-runtime-acm | mariadb (172.17.0.3:3306) open
kafka | 	log.segment.delete.delay.ms = 60000
policy-clamp-ac-k8s-ppnt | Policy clamp Kubernetes participant config file: /opt/app/policy/clamp/etc/KubernetesParticipantParameters.yaml
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:43.963+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/onap/policy/clamp/acm/httpparticipant'
policy-clamp-ac-sim-ppnt | kafka (172.17.0.5:9092) open
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-clamp-ac-pf-ppnt | 	security.protocol = PLAINTEXT
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
policy-clamp-runtime-acm | Waiting for kafka port 9092...
policy-pap | Waiting for mariadb port 3306...
kafka | 	max.connection.creation.rate = 2147483647
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.045+00:00|INFO|ServiceManager|main] service manager starting
policy-clamp-ac-k8s-ppnt |
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-clamp-ac-pf-ppnt | 	security.providers = null
simulator | overriding logback.xml
policy-clamp-runtime-acm | Waiting for policy-clamp-ac-http-ppnt port 6969...
policy-pap | mariadb (172.17.0.3:3306) open
kafka | 	max.connections = 2147483647
policy-clamp-ac-sim-ppnt | Policy clamp Simulator participant config file: /opt/app/policy/clamp/etc/SimulatorParticipantParameters.yaml
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.046+00:00|INFO|ServiceManager|main] service manager starting Topic endpoint management
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-clamp-ac-pf-ppnt | 	send.buffer.bytes = 131072
simulator | 2024-01-15 15:27:19,960 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
policy-clamp-runtime-acm | kafka (172.17.0.5:9092) open
policy-pap | Waiting for kafka port 9092...
kafka | 	max.connections.per.ip = 2147483647
policy-clamp-ac-sim-ppnt |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.083+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=75c1079f-283a-4d21-9ddf-3c97158a5ec8, consumerInstance=policy-clamp-ac-http-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-clamp-ac-pf-ppnt | 	session.timeout.ms = 45000
simulator | 2024-01-15 15:27:20,058 INFO org.onap.policy.models.simulators starting
policy-clamp-runtime-acm | policy-clamp-ac-http-ppnt (172.17.0.9:6969) open
policy-pap | kafka (172.17.0.5:9092) open
kafka | 	max.connections.per.ip.overrides =
policy-clamp-ac-sim-ppnt |   .   ____          _            __ _ _
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.191+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-clamp-ac-pf-ppnt | 	socket.connection.setup.timeout.max.ms = 30000
simulator | 2024-01-15 15:27:20,058 INFO org.onap.policy.models.simulators starting DMaaP provider
policy-clamp-runtime-acm | Waiting for policy-clamp-ac-k8s-ppnt port 6969...
policy-pap | Waiting for api port 6969...
kafka | 	max.incremental.fetch.session.cache.slots = 1000
policy-clamp-ac-sim-ppnt |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-clamp-ac-http-ppnt | 	allow.auto.create.topics = true
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-clamp-ac-pf-ppnt | 	socket.connection.setup.timeout.ms = 10000
simulator | 2024-01-15 15:27:20,059 INFO service manager starting
policy-clamp-runtime-acm | policy-clamp-ac-k8s-ppnt (172.17.0.7:6969) open
policy-pap | api (172.17.0.10:6969) open
kafka | 	message.max.bytes = 1048588
policy-clamp-ac-sim-ppnt | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-clamp-ac-http-ppnt | 	auto.commit.interval.ms = 5000
policy-clamp-ac-k8s-ppnt |   .   ____          _            __ _ _
policy-clamp-ac-pf-ppnt | 	ssl.cipher.suites = null
simulator | 2024-01-15 15:27:20,059 INFO service manager starting Topic Sweeper
policy-clamp-runtime-acm | Waiting for policy-clamp-ac-pf-ppnt port 6969...
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
kafka | 	metadata.log.dir = null
policy-clamp-ac-sim-ppnt |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-clamp-ac-http-ppnt | 	auto.include.jmx.reporter = true
policy-clamp-ac-http-ppnt | 	auto.offset.reset = latest
policy-clamp-ac-k8s-ppnt |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-clamp-ac-pf-ppnt | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
simulator | 2024-01-15 15:27:20,060 INFO service manager started
policy-clamp-runtime-acm | policy-clamp-ac-pf-ppnt (172.17.0.11:6969) open
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
kafka | 	metadata.log.max.record.bytes.between.snapshots = 20971520
policy-clamp-ac-sim-ppnt |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-clamp-ac-http-ppnt | 	bootstrap.servers = [kafka:9092]
policy-clamp-ac-k8s-ppnt | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-clamp-ac-pf-ppnt | 	ssl.endpoint.identification.algorithm = https
simulator | 2024-01-15 15:27:20,060 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
policy-clamp-runtime-acm | Waiting for apex-pdp port 6969...
policy-pap |
kafka | 	metadata.log.max.snapshot.interval.ms = 3600000
policy-clamp-ac-sim-ppnt |  =========|_|==============|___/=/_/_/_/
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-clamp-ac-http-ppnt | 	check.crcs = true
policy-clamp-ac-k8s-ppnt |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-clamp-ac-pf-ppnt | 	ssl.engine.factory.class = null
simulator | 2024-01-15 15:27:20,325 INFO org.onap.policy.models.simulators starting DMaaP simulator
policy-clamp-runtime-acm | apex-pdp (172.17.0.13:6969) open
policy-pap |   .   ____          _            __ _ _
kafka | 	metadata.log.segment.bytes = 1073741824
policy-clamp-ac-sim-ppnt |  :: Spring Boot ::                (v3.1.4)
policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded!
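The policy-db-migrator lines above are a netcat wait loop: each probe either fails with "Connection refused" or connects and immediately closes, which is exactly what mariadb reports as an aborted unauthenticated connection in its own log. An equivalent wait-for-port sketch (illustration only, not the migrator's actual script):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Sketch of the wait-for-port pattern behind the "nc: connect to mariadb ...
    // Connection refused" retries above. A successful probe opens and closes a
    // TCP connection without authenticating, which mariadb logs as
    // "Aborted connection ... (This connection closed normally without authentication)".
    public class WaitForPort {
        public static void await(String host, int port) throws InterruptedException {
            while (true) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, port), 2_000);
                    return; // port is open
                } catch (IOException retry) {
                    Thread.sleep(1_000);
                }
            }
        }
    }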
policy-clamp-ac-http-ppnt | client.dns.lookup = use_all_dns_ips
policy-clamp-ac-k8s-ppnt | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-clamp-ac-pf-ppnt | ssl.key.password = null
simulator | 2024-01-15 15:27:20,470 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=DMaaP simulator, host=0.0.0.0, port=3904, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@fd8294b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@5974109{/,null,STOPPED}, connector=DMaaP simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:3904}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-clamp-runtime-acm | Policy clamp runtime acm config file: /opt/app/policy/clamp/etc/AcRuntimeParameters.yaml
policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
kafka | metadata.log.segment.min.bytes = 8388608
policy-clamp-ac-sim-ppnt |
policy-db-migrator | 321 blocks
policy-clamp-ac-http-ppnt | client.id = consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2
policy-clamp-ac-k8s-ppnt | =========|_|==============|___/=/_/_/_/
policy-clamp-ac-pf-ppnt | ssl.keymanager.algorithm = SunX509
simulator | 2024-01-15 15:27:20,480 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=DMaaP simulator, host=0.0.0.0, port=3904, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@fd8294b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@5974109{/,null,STOPPED}, connector=DMaaP simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:3904}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-clamp-runtime-acm |
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
kafka | metadata.log.segment.ms = 604800000
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:30.005+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
policy-db-migrator | Preparing upgrade release version: 0800
policy-clamp-ac-http-ppnt | client.rack =
policy-clamp-ac-k8s-ppnt | :: Spring Boot :: (v3.1.4)
policy-clamp-ac-pf-ppnt | ssl.keystore.certificate.chain = null
simulator | 2024-01-15 15:27:20,482 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=DMaaP simulator, host=0.0.0.0, port=3904, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@fd8294b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@5974109{/,null,STOPPED}, connector=DMaaP simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:3904}, jettyThread=Thread[DMaaP simulator-3904,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-clamp-runtime-acm | . ____ _ __ _ _
policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
kafka | metadata.max.idle.interval.ms = 500
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:30.237+00:00|INFO|Application|main] Starting Application using Java 17.0.9 with PID 13 (/app/app.jar started by policy in /opt/app/policy/clamp/bin)
policy-db-migrator | Preparing upgrade release version: 0900
policy-clamp-ac-http-ppnt | connections.max.idle.ms = 540000
policy-clamp-ac-k8s-ppnt |
policy-clamp-ac-pf-ppnt | ssl.keystore.key = null
simulator | 2024-01-15 15:27:20,485 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
policy-clamp-runtime-acm | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:30.238+00:00|INFO|Application|main] No active profile set, falling back to 1 default profile: "default"
policy-db-migrator | Preparing upgrade release version: 1000
kafka | metadata.max.retention.bytes = 104857600
kafka | metadata.max.retention.ms = 604800000
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:27:30.338+00:00|INFO|Application|main] Starting Application using Java 17.0.9 with PID 11 (/app/app.jar started by policy in /opt/app/policy/clamp/bin)
policy-clamp-ac-pf-ppnt | ssl.keystore.location = null
simulator | 2024-01-15 15:27:20,525 INFO Session workerName=node0
policy-clamp-runtime-acm | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap | =========|_|==============|___/=/_/_/_/
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:36.584+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-db-migrator | Preparing upgrade release version: 1100
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:27:30.351+00:00|INFO|Application|main] No active profile set, falling back to 1 default profile: "default"
policy-clamp-ac-pf-ppnt | ssl.keystore.password = null
simulator | 2024-01-15 15:27:21,028 INFO Using GSON for REST calls
policy-clamp-runtime-acm | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
policy-pap | :: Spring Boot :: (v3.1.4)
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:36.616+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-db-migrator | Preparing upgrade release version: 1200
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:27:45.752+00:00|INFO|network|main] [OUT|KAFKA|policy-acruntime-participant]
policy-clamp-ac-pf-ppnt | ssl.keystore.type = JKS
simulator | 2024-01-15 15:27:21,112 INFO Started o.e.j.s.ServletContextHandler@5974109{/,null,AVAILABLE}
policy-clamp-runtime-acm | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:36.618+00:00|INFO|StandardService|main] Starting service [Tomcat]
kafka | min.insync.replicas = 1
kafka | node.id = 1
policy-clamp-ac-k8s-ppnt | {"participantSupportedElementType":[{"id":"7f97fcb8-2a7c-4f99-b027-f3849613ccbf","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"91bad685-f287-4001-b1dc-c480b4638a90","timestamp":"2024-01-15T15:27:45.690051924Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"}
policy-clamp-ac-pf-ppnt | ssl.protocol = TLSv1.3
simulator | 2024-01-15 15:27:21,122 INFO Started DMaaP simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:3904}
policy-clamp-runtime-acm | =========|_|==============|___/=/_/_/_/
policy-pap | [2024-01-15T15:27:59.826+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 40 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:36.618+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16]
kafka | num.io.threads = 8
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:27:47.101+00:00|INFO|Application|main] Started Application in 18.512 seconds (process running for 20.116)
policy-clamp-ac-pf-ppnt | ssl.provider = null
simulator | 2024-01-15 15:27:21,129 INFO Started Server@fd8294b{STARTING}[11.0.18,sto=0] @1764ms
policy-clamp-runtime-acm | :: Spring Boot :: (v3.1.4)
policy-pap | [2024-01-15T15:27:59.835+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:36.943+00:00|INFO|[/onap/policy/clamp/acm/simparticipant]|main] Initializing Spring embedded WebApplicationContext
policy-clamp-ac-http-ppnt | default.api.timeout.ms = 60000
policy-clamp-ac-http-ppnt | enable.auto.commit = true
kafka | num.network.threads = 3
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:06.544+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-ac-pf-ppnt | ssl.secure.random.implementation = null
simulator | 2024-01-15 15:27:21,130 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=DMaaP simulator, host=0.0.0.0, port=3904, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@fd8294b{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@5974109{/,null,AVAILABLE}, connector=DMaaP simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:3904}, jettyThread=Thread[DMaaP simulator-3904,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4352 ms.
policy-clamp-runtime-acm |
policy-pap | [2024-01-15T15:28:02.257+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:36.943+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 6411 ms
policy-db-migrator | Preparing upgrade release version: 1300
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
policy-clamp-ac-k8s-ppnt | {"participantSupportedElementType":[{"id":"ca3110f9-47ca-4135-be0d-db3862b51b45","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"c20e92a6-a0cb-4d1c-b7de-8adb09ada244","timestamp":"2024-01-15T15:28:05.836283997Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"}
policy-clamp-ac-pf-ppnt | ssl.trustmanager.algorithm = PKIX
policy-clamp-runtime-acm | [2024-01-15T15:28:15.752+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
policy-pap | [2024-01-15T15:28:02.446+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 165 ms. Found 7 JPA repository interfaces.
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:39.601+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:30.915+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
simulator | 2024-01-15 15:27:21,133 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
policy-clamp-ac-pf-ppnt | ssl.truststore.certificates = null
policy-clamp-runtime-acm | [2024-01-15T15:28:15.851+00:00|INFO|Application|main] Starting Application using Java 17.0.9 with PID 58 (/app/app.jar started by policy in /opt/app/policy/clamp/bin)
policy-pap | [2024-01-15T15:28:02.989+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-clamp-ac-sim-ppnt | allow.auto.create.topics = true
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
policy-clamp-ac-k8s-ppnt | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"552fa693-0f35-4f3d-bcba-48cfac49cb30","timestamp":"2024-01-15T15:28:30.853208592Z"}
simulator | 2024-01-15 15:27:21,134 INFO org.onap.policy.models.simulators starting A&AI simulator
policy-clamp-ac-pf-ppnt | ssl.truststore.location = null
policy-clamp-runtime-acm | [2024-01-15T15:28:15.853+00:00|INFO|Application|main] No active profile set, falling back to 1 default profile: "default"
policy-pap | [2024-01-15T15:28:02.990+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-clamp-ac-sim-ppnt | auto.commit.interval.ms = 5000
kafka | offsets.retention.check.interval.ms = 600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:30.974+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant]
simulator | 2024-01-15 15:27:21,140 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4fc5e095{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@435871cb{/,null,STOPPED}, connector=A&AI simulator@6b9ce1bf{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-clamp-ac-pf-ppnt | ssl.truststore.password = null
policy-clamp-runtime-acm | [2024-01-15T15:28:17.247+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2024-01-15T15:28:04.039+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-clamp-ac-sim-ppnt | auto.include.jmx.reporter = true
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"7f97fcb8-2a7c-4f99-b027-f3849613ccbf","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"ccd5cbb5-86b9-45b0-aa86-b9901ef300a9","timestamp":"2024-01-15T15:28:30.922233847Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"}
simulator | 2024-01-15 15:27:21,142 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4fc5e095{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@435871cb{/,null,STOPPED}, connector=A&AI simulator@6b9ce1bf{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-clamp-ac-pf-ppnt | ssl.truststore.type = JKS
policy-clamp-runtime-acm | [2024-01-15T15:28:17.460+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 202 ms. Found 5 JPA repository interfaces.
policy-pap | [2024-01-15T15:28:04.050+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-clamp-ac-sim-ppnt | auto.offset.reset = latest
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:30.992+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
simulator | 2024-01-15 15:27:21,145 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4fc5e095{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@435871cb{/,null,STOPPED}, connector=A&AI simulator@6b9ce1bf{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-clamp-ac-pf-ppnt | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-runtime-acm | [2024-01-15T15:28:18.951+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.acm.runtime.supervision.SupervisionAspect
policy-pap | [2024-01-15T15:28:04.052+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-clamp-ac-sim-ppnt | bootstrap.servers = [kafka:9092]
kafka | password.encoder.keyfactory.algorithm = null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"d19a6d28-a7d0-4fb8-a2bf-addcffc2e329","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"c5714361-0311-4541-aeaa-881ea2ed50d9","timestamp":"2024-01-15T15:28:30.909550659Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"}
simulator | 2024-01-15 15:27:21,146 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
policy-clamp-ac-pf-ppnt |
policy-clamp-runtime-acm | [2024-01-15T15:28:19.661+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-pap | [2024-01-15T15:28:04.052+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16]
policy-clamp-ac-sim-ppnt | check.crcs = true
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.id.expiration.check.interval.ms = 600000
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:30.992+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
simulator | 2024-01-15 15:27:21,162 INFO Session workerName=node0
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.661+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-clamp-runtime-acm | [2024-01-15T15:28:19.673+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2024-01-15T15:28:04.159+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-clamp-ac-sim-ppnt | client.dns.lookup = use_all_dns_ips
kafka | producer.id.expiration.ms = 86400000
kafka | producer.purgatory.purge.interval.requests = 1000
kafka | queued.max.request.bytes = -1
policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"8ccc2300-50ed-4075-a45b-c429651e9a40","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"d215a98e-7258-4498-8b3e-98d0866bfe7e","timestamp":"2024-01-15T15:28:30.907805883Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"}
simulator | 2024-01-15 15:27:21,282 INFO Using GSON for REST calls
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.661+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-clamp-runtime-acm | [2024-01-15T15:28:19.675+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2024-01-15T15:28:04.160+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 4223 ms
policy-clamp-ac-sim-ppnt | client.id = consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
policy-clamp-ac-http-ppnt | exclude.internal.topics = true
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:31.005+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.662+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332485661
simulator | 2024-01-15 15:27:21,299 INFO Started o.e.j.s.ServletContextHandler@435871cb{/,null,AVAILABLE}
policy-clamp-runtime-acm | [2024-01-15T15:28:19.675+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16]
policy-pap | [2024-01-15T15:28:04.669+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-clamp-ac-sim-ppnt | client.rack =
policy-db-migrator | Done
kafka | quota.window.size.seconds = 1
policy-clamp-ac-http-ppnt | fetch.max.bytes = 52428800
policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"7f97fcb8-2a7c-4f99-b027-f3849613ccbf","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"ccd5cbb5-86b9-45b0-aa86-b9901ef300a9","timestamp":"2024-01-15T15:28:30.922233847Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"}
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.663+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Subscribed to topic(s): policy-acruntime-participant
simulator | 2024-01-15 15:27:21,303 INFO Started A&AI simulator@6b9ce1bf{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
policy-clamp-runtime-acm | [2024-01-15T15:28:19.794+00:00|INFO|[/onap/policy/clamp/acm]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2024-01-15T15:28:04.760+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
policy-pap | [2024-01-15T15:28:04.763+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
policy-pap | [2024-01-15T15:28:04.852+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-db-migrator | name version
policy-pap | [2024-01-15T15:28:05.267+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:31.010+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-runtime-acm | [2024-01-15T15:28:19.795+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3821 ms
policy-clamp-runtime-acm | [2024-01-15T15:28:20.287+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-clamp-runtime-acm | [2024-01-15T15:28:20.349+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
policy-clamp-runtime-acm | [2024-01-15T15:28:20.352+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
policy-clamp-runtime-acm | [2024-01-15T15:28:20.390+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-clamp-runtime-acm | [2024-01-15T15:28:20.730+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-clamp-runtime-acm | [2024-01-15T15:28:20.747+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.664+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=400ffd83-9aaa-4908-a701-f225a47ce48f, alive=false, publisher=null]]: starting
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.709+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-clamp-ac-k8s-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"ca3110f9-47ca-4135-be0d-db3862b51b45","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"aa175f6c-327a-44d3-a931-143c53626cc3","timestamp":"2024-01-15T15:28:30.960765419Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"}
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
policy-clamp-ac-http-ppnt | fetch.max.wait.ms = 500
policy-clamp-ac-http-ppnt | fetch.min.bytes = 1
policy-clamp-ac-http-ppnt | group.id = 75c1079f-283a-4d21-9ddf-3c97158a5ec8
policy-clamp-runtime-acm | [2024-01-15T15:28:20.847+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@6535117e
policy-clamp-runtime-acm | [2024-01-15T15:28:20.849+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
kafka | remote.log.manager.task.interval.ms = 30000
policy-db-migrator | policyadmin 0
policy-clamp-ac-http-ppnt | group.instance.id = null
policy-clamp-ac-http-ppnt | heartbeat.interval.ms = 3000
policy-clamp-ac-sim-ppnt | connections.max.idle.ms = 540000
policy-pap | [2024-01-15T15:28:05.291+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-clamp-ac-pf-ppnt | acks = -1
policy-clamp-runtime-acm | [2024-01-15T15:28:20.877+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:45.291+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-clamp-ac-http-ppnt | interceptor.classes = []
policy-clamp-ac-http-ppnt | internal.leave.group.on.close = true
policy-clamp-ac-sim-ppnt | default.api.timeout.ms = 60000
policy-pap | [2024-01-15T15:28:05.411+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@188cbcde
policy-clamp-ac-pf-ppnt | auto.include.jmx.reporter = true
policy-clamp-runtime-acm | [2024-01-15T15:28:20.879+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
policy-clamp-ac-k8s-ppnt | {"messageType":"PARTICIPANT_PRIME","messageId":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","timestamp":"2024-01-15T15:28:45.269562160Z","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d"}
kafka | remote.log.manager.task.retry.backoff.ms = 500
policy-db-migrator | upgrade: 0 -> 1300
policy-clamp-ac-http-ppnt | internal.throw.on.fetch.stable.offset.unsupported = false
policy-clamp-ac-http-ppnt | isolation.level = read_uncommitted
policy-clamp-ac-sim-ppnt | enable.auto.commit = true
policy-pap | [2024-01-15T15:28:05.414+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-clamp-ac-pf-ppnt | batch.size = 16384
policy-clamp-runtime-acm | [2024-01-15T15:28:22.355+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:45.299+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-acruntime-participant]
kafka | remote.log.manager.task.retry.jitter = 0.2
policy-db-migrator |
policy-clamp-ac-http-ppnt | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-ac-http-ppnt | max.partition.fetch.bytes = 1048576
policy-clamp-ac-sim-ppnt | exclude.internal.topics = true
policy-pap | [2024-01-15T15:28:05.442+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
policy-clamp-ac-pf-ppnt | bootstrap.servers = [kafka:9092]
policy-clamp-runtime-acm | [2024-01-15T15:28:22.575+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-clamp-ac-k8s-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"}
kafka | remote.log.manager.thread.pool.size = 10
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-clamp-ac-http-ppnt | max.poll.interval.ms = 300000
policy-clamp-ac-http-ppnt | max.poll.records = 500
policy-clamp-ac-sim-ppnt | fetch.max.bytes = 52428800
policy-pap | [2024-01-15T15:28:05.444+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
policy-clamp-ac-pf-ppnt | buffer.memory = 33554432
policy-clamp-runtime-acm | [2024-01-15T15:28:23.112+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.models.acm.persistence.repository.AutomationCompositionRepository
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:45.304+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | remote.log.metadata.manager.class.name = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | metadata.max.age.ms = 300000
policy-clamp-ac-http-ppnt | metric.reporters = []
policy-clamp-ac-sim-ppnt | fetch.max.wait.ms = 500
policy-pap | [2024-01-15T15:28:07.664+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-clamp-ac-pf-ppnt | client.dns.lookup = use_all_dns_ips
policy-clamp-runtime-acm | [2024-01-15T15:28:23.212+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.models.acm.persistence.repository.AutomationCompositionElementRepository
policy-clamp-ac-k8s-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"}
kafka | remote.log.metadata.manager.class.path = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-clamp-ac-http-ppnt | metrics.num.samples = 2
policy-clamp-ac-http-ppnt | metrics.recording.level = INFO
policy-clamp-ac-sim-ppnt | fetch.min.bytes = 1
policy-pap | [2024-01-15T15:28:07.668+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-clamp-ac-pf-ppnt | client.id = producer-1
policy-clamp-runtime-acm | [2024-01-15T15:28:23.278+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.models.acm.persistence.repository.NodeTemplateStateRepository
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:45.309+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | remote.log.metadata.manager.impl.prefix = null
policy-db-migrator | --------------
simulator | 2024-01-15 15:27:21,303 INFO Started Server@4fc5e095{STARTING}[11.0.18,sto=0] @1938ms
policy-clamp-ac-http-ppnt | metrics.sample.window.ms = 30000
policy-clamp-ac-sim-ppnt | group.id = 845348fa-d712-41dc-bc31-ba3c79964bd7
policy-pap | [2024-01-15T15:28:08.337+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
policy-clamp-ac-pf-ppnt | compression.type = none
policy-clamp-runtime-acm | [2024-01-15T15:28:23.594+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-clamp-ac-k8s-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"}
kafka | remote.log.metadata.manager.listener.name = null
policy-db-migrator |
simulator | 2024-01-15 15:27:21,304 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4fc5e095{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@435871cb{/,null,AVAILABLE}, connector=A&AI simulator@6b9ce1bf{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4841 ms.
policy-clamp-ac-http-ppnt | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-clamp-ac-sim-ppnt | group.instance.id = null
policy-pap | [2024-01-15T15:28:09.047+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
policy-clamp-ac-pf-ppnt | connections.max.idle.ms = 540000
policy-clamp-runtime-acm | allow.auto.create.topics = true
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:45.315+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | remote.log.reader.max.pending.tasks = 100
policy-db-migrator |
simulator | 2024-01-15 15:27:21,305 INFO org.onap.policy.models.simulators starting SDNC simulator
policy-clamp-ac-http-ppnt | receive.buffer.bytes = 65536
policy-clamp-ac-sim-ppnt | heartbeat.interval.ms = 3000
policy-pap | [2024-01-15T15:28:09.163+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
policy-clamp-ac-pf-ppnt | delivery.timeout.ms = 120000
policy-clamp-runtime-acm | auto.commit.interval.ms = 5000
policy-clamp-ac-k8s-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"}
kafka | remote.log.reader.threads = 10
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
simulator | 2024-01-15 15:27:21,308 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4bef0fe3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@62ea3440{/,null,STOPPED}, connector=SDNC simulator@79da1ec0{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-clamp-ac-http-ppnt | reconnect.backoff.max.ms = 1000
policy-clamp-ac-sim-ppnt | interceptor.classes = []
policy-pap | [2024-01-15T15:28:09.493+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-clamp-ac-pf-ppnt | enable.idempotence = true
policy-clamp-runtime-acm | auto.include.jmx.reporter = true
policy-clamp-ac-k8s-ppnt | [2024-01-15T15:28:45.407+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | remote.log.storage.manager.class.name = null
policy-db-migrator | --------------
simulator | 2024-01-15 15:27:21,308 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4bef0fe3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@62ea3440{/,null,STOPPED}, connector=SDNC simulator@79da1ec0{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-clamp-ac-http-ppnt | reconnect.backoff.ms = 50
policy-clamp-ac-sim-ppnt | internal.leave.group.on.close = true
policy-pap | allow.auto.create.topics = true
policy-clamp-runtime-acm | auto.offset.reset = latest
policy-clamp-ac-k8s-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90","state":"ON_LINE"}
kafka | remote.log.storage.manager.class.path = null
policy-clamp-ac-pf-ppnt | interceptor.classes = []
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
simulator | 2024-01-15 15:27:21,310 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4bef0fe3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@62ea3440{/,null,STOPPED}, connector=SDNC simulator@79da1ec0{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-clamp-ac-http-ppnt | request.timeout.ms = 30000
policy-clamp-ac-sim-ppnt | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | auto.commit.interval.ms = 5000
policy-clamp-runtime-acm | bootstrap.servers = [kafka:9092]
kafka | remote.log.storage.manager.impl.prefix = null
policy-clamp-ac-pf-ppnt | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | --------------
simulator | 2024-01-15 15:27:21,311 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
policy-clamp-ac-http-ppnt | retry.backoff.ms = 100
policy-clamp-ac-sim-ppnt | isolation.level = read_uncommitted
policy-pap | auto.include.jmx.reporter = true
policy-clamp-runtime-acm | check.crcs = true
kafka | remote.log.storage.system.enable = false
policy-clamp-ac-pf-ppnt | linger.ms = 0
policy-db-migrator |
simulator | 2024-01-15 15:27:21,314 INFO Session workerName=node0
policy-clamp-ac-http-ppnt | sasl.client.callback.handler.class = null
policy-clamp-ac-sim-ppnt | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | auto.offset.reset = latest
policy-clamp-runtime-acm | client.dns.lookup = use_all_dns_ips
kafka | replica.fetch.backoff.ms = 1000
policy-clamp-ac-pf-ppnt | max.block.ms = 60000
policy-db-migrator |
simulator | 2024-01-15 15:27:21,379 INFO Using GSON for REST calls
policy-clamp-ac-http-ppnt | sasl.jaas.config = null
policy-clamp-ac-sim-ppnt | max.partition.fetch.bytes = 1048576
policy-pap | bootstrap.servers = [kafka:9092]
policy-clamp-runtime-acm | client.id = consumer-53971418-7c64-4b4a-8b2e-3deb55882781-1
kafka | replica.fetch.max.bytes = 1048576
policy-clamp-ac-pf-ppnt | max.in.flight.requests.per.connection = 5
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
simulator | 2024-01-15 15:27:21,388 INFO Started o.e.j.s.ServletContextHandler@62ea3440{/,null,AVAILABLE}
policy-clamp-ac-http-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-clamp-ac-sim-ppnt | max.poll.interval.ms = 300000
policy-pap | check.crcs = true
policy-clamp-runtime-acm | client.rack =
kafka | replica.fetch.min.bytes = 1
policy-clamp-ac-pf-ppnt | max.request.size = 1048576
simulator | 2024-01-15 15:27:21,391 INFO Started SDNC simulator@79da1ec0{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
policy-clamp-ac-http-ppnt | sasl.kerberos.min.time.before.relogin = 60000
policy-clamp-ac-sim-ppnt | max.poll.records = 500
policy-pap | client.dns.lookup = use_all_dns_ips
policy-clamp-runtime-acm | connections.max.idle.ms = 540000
kafka | replica.fetch.response.max.bytes = 10485760
policy-clamp-ac-pf-ppnt | metadata.max.age.ms = 300000
policy-db-migrator | --------------
simulator | 2024-01-15 15:27:21,391 INFO Started Server@4bef0fe3{STARTING}[11.0.18,sto=0] @2026ms
policy-clamp-ac-http-ppnt | sasl.kerberos.service.name = null
policy-clamp-ac-sim-ppnt | metadata.max.age.ms = 300000
policy-pap | client.id = consumer-aa559ce3-1840-4027-b443-4b66dabb9280-1
policy-clamp-runtime-acm | default.api.timeout.ms = 60000
kafka | replica.fetch.wait.max.ms = 500
policy-clamp-ac-pf-ppnt | metadata.max.idle.ms = 300000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05
policy-clamp-ac-sim-ppnt | metric.reporters = []
policy-pap | client.rack =
policy-clamp-runtime-acm | enable.auto.commit = true
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
policy-clamp-ac-pf-ppnt | metric.reporters = []
simulator | 2024-01-15 15:27:21,391 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4bef0fe3{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@62ea3440{/,null,AVAILABLE}, connector=SDNC simulator@79da1ec0{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4919 ms.
policy-db-migrator |
policy-clamp-ac-http-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-clamp-ac-sim-ppnt | metrics.num.samples = 2
policy-pap | connections.max.idle.ms = 540000
policy-clamp-runtime-acm | exclude.internal.topics = true
kafka | replica.lag.time.max.ms = 30000
policy-clamp-ac-pf-ppnt | metrics.num.samples = 2
simulator | 2024-01-15 15:27:21,392 INFO org.onap.policy.models.simulators starting SO simulator
policy-db-migrator |
policy-clamp-ac-http-ppnt | sasl.login.callback.handler.class = null
policy-clamp-ac-sim-ppnt | metrics.recording.level = INFO
policy-pap | default.api.timeout.ms = 60000
policy-clamp-runtime-acm | fetch.max.bytes = 52428800
kafka | replica.selector.class = null
policy-clamp-ac-pf-ppnt | metrics.recording.level = INFO
simulator | 2024-01-15 15:27:21,395 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@5bfa8cc5{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@666b83a4{/,null,STOPPED}, connector=SO simulator@556d0826{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-clamp-ac-http-ppnt | sasl.login.class = null
policy-clamp-ac-sim-ppnt | metrics.sample.window.ms = 30000
policy-pap | enable.auto.commit = true
policy-clamp-runtime-acm | fetch.max.wait.ms = 500
kafka | replica.socket.receive.buffer.bytes = 65536
policy-clamp-ac-pf-ppnt | metrics.sample.window.ms = 30000
simulator | 2024-01-15 15:27:21,397 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@5bfa8cc5{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@666b83a4{/,null,STOPPED}, connector=SO simulator@556d0826{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | sasl.login.connect.timeout.ms = null
policy-clamp-ac-sim-ppnt | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | exclude.internal.topics = true
policy-clamp-runtime-acm | fetch.min.bytes = 1
kafka | replica.socket.timeout.ms = 30000
policy-clamp-ac-pf-ppnt | partitioner.adaptive.partitioning.enable = true
simulator | 2024-01-15 15:27:21,408 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@5bfa8cc5{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@666b83a4{/,null,STOPPED}, connector=SO simulator@556d0826{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-clamp-ac-http-ppnt | sasl.login.read.timeout.ms = null
policy-clamp-ac-sim-ppnt | receive.buffer.bytes = 65536
policy-pap | fetch.max.bytes = 52428800
policy-clamp-runtime-acm | group.id = 53971418-7c64-4b4a-8b2e-3deb55882781
kafka | replication.quota.window.num = 11
policy-clamp-ac-pf-ppnt | partitioner.availability.timeout.ms = 0
simulator | 2024-01-15 15:27:21,409 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | sasl.login.refresh.buffer.seconds = 300
policy-clamp-ac-sim-ppnt | reconnect.backoff.max.ms = 1000
policy-pap | fetch.max.wait.ms = 500
policy-clamp-runtime-acm | group.instance.id = null
kafka | replication.quota.window.size.seconds = 1
policy-clamp-ac-pf-ppnt | partitioner.class = null
simulator | 2024-01-15 15:27:21,414 INFO Session workerName=node0
policy-db-migrator |
policy-clamp-ac-http-ppnt | sasl.login.refresh.min.period.seconds = 60
policy-clamp-ac-sim-ppnt | reconnect.backoff.ms = 50
policy-pap | fetch.min.bytes = 1
policy-clamp-runtime-acm | heartbeat.interval.ms = 3000
kafka | request.timeout.ms = 30000
policy-clamp-ac-pf-ppnt | partitioner.ignore.keys = false
simulator | 2024-01-15 15:27:21,471 INFO Using GSON for REST calls
policy-db-migrator |
policy-clamp-ac-http-ppnt | sasl.login.refresh.window.factor = 0.8
policy-clamp-ac-sim-ppnt | request.timeout.ms = 30000
policy-pap | group.id = aa559ce3-1840-4027-b443-4b66dabb9280
policy-clamp-runtime-acm | interceptor.classes = []
kafka | reserved.broker.max.id = 1000
policy-clamp-ac-pf-ppnt | receive.buffer.bytes = 32768
simulator | 2024-01-15 15:27:21,485 INFO Started o.e.j.s.ServletContextHandler@666b83a4{/,null,AVAILABLE}
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-clamp-ac-http-ppnt | sasl.login.refresh.window.jitter = 0.05
policy-clamp-ac-sim-ppnt | retry.backoff.ms = 100
policy-pap | group.instance.id = null
policy-clamp-runtime-acm | internal.leave.group.on.close = true
kafka | sasl.client.callback.handler.class = null
policy-clamp-ac-pf-ppnt | reconnect.backoff.max.ms = 1000
simulator | 2024-01-15 15:27:21,502 INFO Started SO simulator@556d0826{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | sasl.login.retry.backoff.max.ms = 10000
policy-clamp-ac-sim-ppnt | sasl.client.callback.handler.class = null
policy-pap | heartbeat.interval.ms = 3000
policy-clamp-runtime-acm | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | sasl.enabled.mechanisms = [GSSAPI]
policy-clamp-ac-pf-ppnt | reconnect.backoff.ms = 50
simulator | 2024-01-15 15:27:21,503 INFO Started Server@5bfa8cc5{STARTING}[11.0.18,sto=0] @2138ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-clamp-ac-http-ppnt | sasl.login.retry.backoff.ms = 100
policy-clamp-ac-sim-ppnt | sasl.jaas.config = null
policy-pap | interceptor.classes = []
policy-clamp-runtime-acm | isolation.level = read_uncommitted
kafka | sasl.jaas.config = null
policy-clamp-ac-pf-ppnt | request.timeout.ms = 30000
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | sasl.mechanism = GSSAPI
policy-clamp-ac-sim-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | internal.leave.group.on.close = true
policy-clamp-runtime-acm | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-clamp-ac-pf-ppnt | retries = 2147483647
simulator | 2024-01-15 15:27:21,503 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@5bfa8cc5{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@666b83a4{/,null,AVAILABLE}, connector=SO simulator@556d0826{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4898 ms.
policy-db-migrator |
policy-clamp-ac-http-ppnt | sasl.oauthbearer.clock.skew.seconds = 30
policy-clamp-ac-sim-ppnt | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-clamp-runtime-acm | max.partition.fetch.bytes = 1048576
kafka | sasl.kerberos.min.time.before.relogin = 60000
policy-clamp-ac-pf-ppnt | retry.backoff.ms = 100
simulator | 2024-01-15 15:27:21,505 INFO org.onap.policy.models.simulators starting VFC simulator
policy-db-migrator |
policy-clamp-ac-http-ppnt | sasl.oauthbearer.expected.audience = null
policy-clamp-ac-sim-ppnt | sasl.kerberos.service.name = null
policy-pap | isolation.level = read_uncommitted
policy-clamp-runtime-acm | max.poll.interval.ms = 300000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
policy-clamp-ac-pf-ppnt | sasl.client.callback.handler.class = null
simulator | 2024-01-15 15:27:21,509 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2b95e48b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4a3329b9{/,null,STOPPED}, connector=VFC simulator@efde75f{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-clamp-ac-http-ppnt | sasl.oauthbearer.expected.issuer = null
policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-runtime-acm | max.poll.records = 500
kafka | sasl.kerberos.service.name = null
policy-clamp-ac-pf-ppnt | sasl.jaas.config = null
simulator | 2024-01-15 15:27:21,510 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2b95e48b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4a3329b9{/,null,STOPPED}, connector=VFC simulator@efde75f{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | max.partition.fetch.bytes = 1048576
policy-clamp-runtime-acm | metadata.max.age.ms = 300000
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
policy-clamp-ac-pf-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit
simulator | 2024-01-15 15:27:21,513 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2b95e48b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4a3329b9{/,null,STOPPED}, connector=VFC simulator@efde75f{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-clamp-ac-sim-ppnt | sasl.login.callback.handler.class = null
policy-pap | max.poll.interval.ms = 300000
policy-clamp-runtime-acm | metric.reporters = []
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-clamp-ac-pf-ppnt | sasl.kerberos.min.time.before.relogin = 60000
simulator | 2024-01-15 15:27:21,515 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-clamp-ac-sim-ppnt | sasl.login.class = null
policy-pap | max.poll.records = 500
policy-clamp-runtime-acm | metrics.num.samples = 2
kafka | sasl.login.callback.handler.class = null
policy-clamp-ac-pf-ppnt | sasl.kerberos.service.name = null
simulator | 2024-01-15 15:27:21,518 INFO Session workerName=node0
policy-db-migrator |
policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.url = null
policy-clamp-ac-sim-ppnt | sasl.login.connect.timeout.ms = null
policy-pap | metadata.max.age.ms = 300000
policy-clamp-runtime-acm | metrics.recording.level = INFO
kafka | sasl.login.class = null
policy-clamp-ac-pf-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05
simulator | 2024-01-15 15:27:21,553 INFO Using GSON for REST calls
policy-db-migrator |
policy-clamp-ac-http-ppnt | sasl.oauthbearer.scope.claim.name = scope
policy-clamp-ac-sim-ppnt | sasl.login.read.timeout.ms = null
policy-pap | metric.reporters = []
policy-clamp-runtime-acm | metrics.sample.window.ms = 30000
kafka | sasl.login.connect.timeout.ms = null
policy-clamp-ac-pf-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8
simulator | 2024-01-15 15:27:21,560 INFO Started o.e.j.s.ServletContextHandler@4a3329b9{/,null,AVAILABLE}
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-clamp-ac-http-ppnt | sasl.oauthbearer.sub.claim.name = sub
policy-clamp-ac-sim-ppnt | sasl.login.refresh.buffer.seconds = 300
policy-pap | metrics.num.samples = 2
policy-clamp-runtime-acm | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
kafka | sasl.login.read.timeout.ms = null
policy-clamp-ac-pf-ppnt | sasl.login.callback.handler.class = null
simulator | 2024-01-15 15:27:21,564 INFO Started VFC simulator@efde75f{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | sasl.oauthbearer.token.endpoint.url = null
policy-clamp-ac-sim-ppnt | sasl.login.refresh.min.period.seconds = 60
policy-pap | metrics.recording.level = INFO
policy-clamp-runtime-acm | receive.buffer.bytes = 65536
kafka | sasl.login.refresh.buffer.seconds = 300
policy-clamp-ac-pf-ppnt | sasl.login.class = null
simulator | 2024-01-15 15:27:21,564 INFO Started Server@2b95e48b{STARTING}[11.0.18,sto=0] @2199ms
policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.factor = 0.8
policy-pap | metrics.sample.window.ms = 30000
policy-clamp-runtime-acm | reconnect.backoff.max.ms = 1000
kafka | sasl.login.refresh.min.period.seconds = 60
policy-clamp-ac-pf-ppnt | sasl.login.connect.timeout.ms = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | security.protocol = PLAINTEXT
simulator | 2024-01-15 15:27:21,565 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2b95e48b{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4a3329b9{/,null,AVAILABLE}, connector=VFC simulator@efde75f{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4948 ms.
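Editor's note: the policy-db-migrator entries follow a fixed pattern, announcing each numbered upgrade script ("> upgrade 0150-...", "> upgrade 0160-...") and then running an idempotent CREATE TABLE IF NOT EXISTS statement. A minimal sketch of replaying one such script over JDBC; the URL and credentials are illustrative, since the CSIT job wires the real values through docker-compose:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public final class MigrationSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; requires a MariaDB JDBC driver on the classpath.
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        // DDL copied from the 0160 script logged above.
        String ddl = "CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata ("
                + "name VARCHAR(120) NULL, version VARCHAR(20) NULL, "
                + "METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
             Statement stmt = conn.createStatement()) {
            // IF NOT EXISTS keeps each script safe to re-run, which is why the
            // migrator can replay the whole numbered sequence on every start.
            stmt.executeUpdate(ddl);
        }
    }
}
```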
policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.jitter = 0.05
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-clamp-runtime-acm | reconnect.backoff.ms = 50
kafka | sasl.login.refresh.window.factor = 0.8
policy-clamp-ac-pf-ppnt | sasl.login.read.timeout.ms = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | security.providers = null
simulator | 2024-01-15 15:27:21,565 INFO org.onap.policy.models.simulators starting Sink appc-cl
policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.max.ms = 10000
policy-pap | receive.buffer.bytes = 65536
policy-clamp-runtime-acm | request.timeout.ms = 30000
kafka | sasl.login.refresh.window.jitter = 0.05
policy-clamp-ac-pf-ppnt | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator |
policy-clamp-ac-http-ppnt | send.buffer.bytes = 131072
simulator | 2024-01-15 15:27:21,578 INFO InlineDmaapTopicSink [userName=null, password=null, getTopicCommInfrastructure()=DMAAP, toString()=InlineBusTopicSink [partitionId=ed4e3d7f-7673-4b42-a00f-e4f60677ed98, alive=false, publisher=null]]: starting
policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.ms = 100
policy-pap | reconnect.backoff.max.ms = 1000
policy-clamp-runtime-acm | retry.backoff.ms = 100
kafka | sasl.login.retry.backoff.max.ms = 10000
policy-clamp-ac-pf-ppnt | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator |
policy-clamp-ac-http-ppnt | session.timeout.ms = 45000
simulator | 2024-01-15 15:27:21,909 INFO InlineDmaapTopicSink [userName=null, password=null, getTopicCommInfrastructure()=DMAAP, toString()=InlineBusTopicSink [partitionId=ed4e3d7f-7673-4b42-a00f-e4f60677ed98, alive=false, publisher=CambriaPublisherWrapper []]]: DMAAP SINK created
policy-clamp-ac-sim-ppnt | sasl.mechanism = GSSAPI
policy-pap | reconnect.backoff.ms = 50
policy-clamp-runtime-acm | sasl.client.callback.handler.class = null
kafka | sasl.login.retry.backoff.ms = 100
policy-clamp-ac-pf-ppnt | sasl.login.refresh.window.factor = 0.8
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-clamp-ac-http-ppnt | socket.connection.setup.timeout.max.ms = 30000
simulator | 2024-01-15 15:27:21,909 INFO org.onap.policy.models.simulators starting Sink appc-lcm-write
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | request.timeout.ms = 30000
policy-clamp-runtime-acm | sasl.jaas.config = null
kafka | sasl.mechanism.controller.protocol = GSSAPI
policy-clamp-ac-pf-ppnt | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | socket.connection.setup.timeout.ms = 10000
simulator | 2024-01-15 15:27:21,909 INFO InlineDmaapTopicSink [userName=null, password=null, getTopicCommInfrastructure()=DMAAP, toString()=InlineBusTopicSink [partitionId=a24be4ae-5f4f-414c-836b-bc7feac196c5, alive=false, publisher=null]]: starting
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.audience = null
policy-pap | retry.backoff.ms = 100
policy-clamp-runtime-acm | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
policy-clamp-ac-pf-ppnt | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-clamp-ac-http-ppnt | ssl.cipher.suites = null
simulator | 2024-01-15 15:27:21,910 INFO InlineDmaapTopicSink [userName=null, password=null, getTopicCommInfrastructure()=DMAAP, toString()=InlineBusTopicSink [partitionId=a24be4ae-5f4f-414c-836b-bc7feac196c5, alive=false, publisher=CambriaPublisherWrapper []]]: DMAAP SINK created
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.client.callback.handler.class = null
policy-clamp-runtime-acm | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.oauthbearer.clock.skew.seconds = 30
policy-clamp-ac-pf-ppnt | sasl.login.retry.backoff.ms = 100
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
simulator | 2024-01-15 15:27:21,910 INFO org.onap.policy.models.simulators starting Source appc-cl
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.jaas.config = null
policy-clamp-runtime-acm | sasl.kerberos.service.name = null
kafka | sasl.oauthbearer.expected.audience = null
policy-clamp-ac-pf-ppnt | sasl.mechanism = GSSAPI
policy-db-migrator |
policy-clamp-ac-http-ppnt | ssl.endpoint.identification.algorithm = https
simulator | 2024-01-15 15:27:21,914 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=22829b8b-7eda-4280-8fbb-124377a1495c, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-cl, effectiveTopic=appc-cl, #recentEvents=0, locked=false, #topicListeners=0]]]]: INITTED
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.oauthbearer.expected.issuer = null
policy-clamp-ac-pf-ppnt | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator |
policy-clamp-ac-http-ppnt | ssl.engine.factory.class = null
simulator | 2024-01-15 15:27:21,937 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=22829b8b-7eda-4280-8fbb-124377a1495c, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-cl, effectiveTopic=appc-cl, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-clamp-ac-pf-ppnt | sasl.oauthbearer.expected.audience = null
policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-clamp-ac-http-ppnt | ssl.key.password = null
simulator | 2024-01-15 15:27:21,938 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=22829b8b-7eda-4280-8fbb-124377a1495c, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-cl, effectiveTopic=appc-cl, #recentEvents=0, locked=false, #topicListeners=0]]]]: INITTED
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.kerberos.service.name = null
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-clamp-ac-pf-ppnt | sasl.oauthbearer.expected.issuer = null
policy-clamp-runtime-acm | sasl.login.callback.handler.class = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | ssl.keymanager.algorithm = SunX509
simulator | 2024-01-15 15:27:21,938 INFO org.onap.policy.models.simulators starting Source appc-lcm-read
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-clamp-runtime-acm | sasl.login.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | ssl.keystore.certificate.chain = null
simulator | 2024-01-15 15:27:21,938 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=bc73ad39-2e41-4278-8095-9ebf50e479e5, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-lcm-read, effectiveTopic=appc-lcm-read, #recentEvents=0, locked=false, #topicListeners=0]]]]: INITTED
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.oauthbearer.jwks.endpoint.url = null
policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-clamp-runtime-acm | sasl.login.connect.timeout.ms = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | ssl.keystore.key = null
simulator | 2024-01-15 15:27:21,939 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=bc73ad39-2e41-4278-8095-9ebf50e479e5, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-lcm-read, effectiveTopic=appc-lcm-read, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.token.endpoint.url = null
policy-pap | sasl.login.callback.handler.class = null
kafka | sasl.oauthbearer.scope.claim.name = scope
policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-clamp-runtime-acm | sasl.login.read.timeout.ms = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | ssl.keystore.location = null
simulator | 2024-01-15 15:27:21,939 INFO UEB GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator
policy-clamp-ac-sim-ppnt | security.protocol = PLAINTEXT
policy-pap | sasl.login.class = null
kafka | sasl.oauthbearer.sub.claim.name = sub
policy-clamp-ac-pf-ppnt | sasl.oauthbearer.jwks.endpoint.url = null
policy-clamp-runtime-acm | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator |
policy-clamp-ac-http-ppnt | ssl.keystore.password = null
simulator | 2024-01-15 15:27:21,939 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=bc73ad39-2e41-4278-8095-9ebf50e479e5, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-lcm-read, effectiveTopic=appc-lcm-read, #recentEvents=0, locked=false, #topicListeners=0]]]]: INITTED
policy-clamp-ac-sim-ppnt | security.providers = null
policy-pap | sasl.login.connect.timeout.ms = null
kafka | sasl.oauthbearer.token.endpoint.url = null
policy-clamp-ac-pf-ppnt | sasl.oauthbearer.scope.claim.name = scope
policy-clamp-runtime-acm | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-clamp-ac-http-ppnt | ssl.keystore.type = JKS
simulator | 2024-01-15 15:27:21,939 INFO org.onap.policy.models.simulators starting APPC Legacy simulator
policy-clamp-ac-sim-ppnt | send.buffer.bytes = 131072
policy-pap | sasl.login.read.timeout.ms = null
kafka | sasl.server.callback.handler.class = null
policy-clamp-ac-pf-ppnt | sasl.oauthbearer.sub.claim.name = sub
policy-clamp-runtime-acm | sasl.login.refresh.window.factor = 0.8
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | ssl.protocol = TLSv1.3
simulator | 2024-01-15 15:27:21,939 INFO UEB GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator
policy-clamp-ac-sim-ppnt | session.timeout.ms = 45000
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.server.max.receive.size = 524288
policy-clamp-ac-pf-ppnt | sasl.oauthbearer.token.endpoint.url = null
policy-clamp-runtime-acm | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | ssl.provider = null
simulator | 2024-01-15 15:27:21,941 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=22829b8b-7eda-4280-8fbb-124377a1495c, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=true, locked=false, uebThread=Thread[DMAAP-source-appc-cl,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-cl, effectiveTopic=appc-cl, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.simulators.AppcLegacyTopicServer@1c4ee95c
policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.max.ms = 30000
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | security.inter.broker.protocol = PLAINTEXT
policy-clamp-ac-pf-ppnt | security.protocol = PLAINTEXT
policy-clamp-runtime-acm | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | ssl.secure.random.implementation = null
simulator | 2024-01-15 15:27:21,942 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=22829b8b-7eda-4280-8fbb-124377a1495c, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=true, locked=false, uebThread=Thread[DMAAP-source-appc-cl,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-cl, effectiveTopic=appc-cl, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.ms = 10000
kafka | security.providers = null
policy-clamp-ac-pf-ppnt | security.providers = null
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-clamp-runtime-acm | sasl.login.retry.backoff.ms = 100
policy-db-migrator |
policy-clamp-ac-http-ppnt | ssl.trustmanager.algorithm = PKIX
simulator | 2024-01-15 15:27:21,942 INFO org.onap.policy.models.simulators starting appc-lcm-simulator
policy-clamp-ac-sim-ppnt | ssl.cipher.suites = null
kafka | server.max.startup.time.ms = 9223372036854775807
policy-clamp-ac-pf-ppnt | send.buffer.bytes = 131072
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-clamp-runtime-acm | sasl.mechanism = GSSAPI
policy-db-migrator |
policy-clamp-ac-http-ppnt | ssl.truststore.certificates = null
simulator | 2024-01-15 15:27:21,942 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=bc73ad39-2e41-4278-8095-9ebf50e479e5, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=true, locked=false, uebThread=Thread[DMAAP-source-appc-lcm-read,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-lcm-read, effectiveTopic=appc-lcm-read, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.simulators.AppcLcmTopicServer@49bd54f7
policy-clamp-ac-sim-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | socket.connection.setup.timeout.max.ms = 30000
policy-clamp-ac-pf-ppnt | socket.connection.setup.timeout.max.ms = 30000
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-clamp-runtime-acm | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-clamp-ac-http-ppnt | ssl.truststore.location = null
simulator | 2024-01-15 15:27:21,943 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=bc73ad39-2e41-4278-8095-9ebf50e479e5, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=true, locked=false, uebThread=Thread[DMAAP-source-appc-lcm-read,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-lcm-read, effectiveTopic=appc-lcm-read, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-clamp-ac-sim-ppnt | ssl.endpoint.identification.algorithm = https
kafka | socket.connection.setup.timeout.ms = 10000
policy-clamp-ac-pf-ppnt | socket.connection.setup.timeout.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-clamp-runtime-acm | sasl.oauthbearer.expected.audience = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | ssl.truststore.password = null
simulator | 2024-01-15 15:27:21,943 INFO org.onap.policy.models.simulators started
policy-clamp-ac-sim-ppnt | ssl.engine.factory.class = null
kafka | socket.listen.backlog.size = 50
policy-clamp-ac-pf-ppnt | ssl.cipher.suites = null
policy-pap | sasl.mechanism = GSSAPI
policy-clamp-runtime-acm | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | ssl.truststore.type = JKS
simulator | 2024-01-15 15:27:21,959 WARN GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator will send credentials over a clear channel.
policy-clamp-ac-sim-ppnt | ssl.key.password = null
kafka | socket.receive.buffer.bytes = 102400
policy-clamp-ac-pf-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
simulator | 2024-01-15 15:27:21,959 WARN GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator will send credentials over a clear channel.
policy-clamp-ac-sim-ppnt | ssl.keymanager.algorithm = SunX509
kafka | socket.request.max.bytes = 104857600
policy-clamp-ac-pf-ppnt | ssl.endpoint.identification.algorithm = https
policy-pap | sasl.oauthbearer.expected.audience = null
policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator |
policy-clamp-ac-http-ppnt |
simulator | 2024-01-15 15:27:21,959 INFO GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator (as some-key) ...
policy-clamp-ac-sim-ppnt | ssl.keystore.certificate.chain = null
kafka | socket.send.buffer.bytes = 102400
policy-clamp-ac-pf-ppnt | ssl.engine.factory.class = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.239+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
simulator | 2024-01-15 15:27:21,959 INFO GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator (as some-key) ...
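Editor's note: the simulator's repeating "UEB GET /events/<topic>/<consumerGroup>/<consumerInstance>" lines (and the matching access-log entries roughly every 15 seconds) are a long-poll loop: each GET is held open by the server until messages arrive or the fetch timeout expires, and the client immediately re-issues it. A minimal sketch of that loop with the JDK HTTP client; the endpoint is copied from the log, everything else is illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class UebPollSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Path mirrors the log: /events/{topic}/{consumerGroup}/{consumerInstance}
        URI uri = URI.create(
            "http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator");
        while (true) {
            HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
            // The server parks the request until messages arrive or its fetch
            // timeout elapses, then answers with a JSON array (here "[]" and HTTP 200),
            // after which the client loops and polls again.
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }
}
```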
policy-clamp-ac-sim-ppnt | ssl.keystore.key = null
kafka | ssl.cipher.suites = []
policy-clamp-ac-pf-ppnt | ssl.key.password = null
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.239+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
simulator | 2024-01-15 15:27:22,059 INFO Topic appc-lcm-read: added
policy-clamp-ac-sim-ppnt | ssl.keystore.location = null
kafka | ssl.client.auth = none
policy-clamp-ac-pf-ppnt | ssl.keymanager.algorithm = SunX509
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-clamp-runtime-acm | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.239+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332464222
simulator | 2024-01-15 15:27:22,059 INFO Topic appc-cl: added
policy-clamp-ac-sim-ppnt | ssl.keystore.password = null
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-clamp-ac-pf-ppnt | ssl.keystore.certificate.chain = null
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-clamp-runtime-acm | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.247+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Subscribed to topic(s): policy-acruntime-participant
simulator | 2024-01-15 15:27:22,060 INFO Topic appc-lcm-read: add consumer group: bc73ad39-2e41-4278-8095-9ebf50e479e5
policy-clamp-ac-sim-ppnt | ssl.keystore.type = JKS
kafka | ssl.endpoint.identification.algorithm = https
policy-clamp-ac-pf-ppnt | ssl.keystore.key = null
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-clamp-runtime-acm | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.275+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=68d87dd1-f764-4706-9a83-bc8fb3f5fe68, alive=false, publisher=null]]: starting
simulator | 2024-01-15 15:27:22,060 INFO Topic appc-cl: add consumer group: 22829b8b-7eda-4280-8fbb-124377a1495c
policy-clamp-ac-sim-ppnt | ssl.protocol = TLSv1.3
kafka | ssl.engine.factory.class = null
policy-clamp-ac-pf-ppnt | ssl.keystore.location = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-clamp-runtime-acm | security.protocol = PLAINTEXT
policy-db-migrator |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.426+00:00|INFO|ProducerConfig|main] ProducerConfig values:
simulator | 2024-01-15 15:27:37,088 INFO --> HTTP/1.1 200 OK
policy-clamp-ac-sim-ppnt | ssl.provider = null
kafka | ssl.key.password = null
policy-clamp-ac-pf-ppnt | ssl.keystore.password = null
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-clamp-runtime-acm | security.providers = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | acks = -1
simulator | 2024-01-15 15:27:37,089 INFO --> HTTP/1.1 200 OK
policy-clamp-ac-sim-ppnt | ssl.secure.random.implementation = null
kafka | ssl.keymanager.algorithm = SunX509
policy-clamp-ac-pf-ppnt | ssl.keystore.type = JKS
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-clamp-runtime-acm | send.buffer.bytes = 131072
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-clamp-ac-http-ppnt | auto.include.jmx.reporter = true
simulator | 2024-01-15 15:27:37,094 INFO 172.17.0.2 - - [15/Jan/2024:15:27:22 +0000] "GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-clamp-ac-sim-ppnt | ssl.trustmanager.algorithm = PKIX
kafka | ssl.keystore.certificate.chain = null
policy-clamp-ac-pf-ppnt | ssl.protocol = TLSv1.3
policy-pap | security.protocol = PLAINTEXT
policy-clamp-runtime-acm | session.timeout.ms = 45000
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | batch.size = 16384
simulator | 2024-01-15 15:27:37,097 INFO UEB GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator
policy-clamp-ac-sim-ppnt | ssl.truststore.certificates = null
kafka | ssl.keystore.key = null
policy-clamp-ac-pf-ppnt | ssl.provider = null
policy-pap | security.providers = null
policy-clamp-runtime-acm | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | bootstrap.servers = [kafka:9092]
simulator | 2024-01-15 15:27:37,097 WARN GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator will send credentials over a clear channel.
policy-clamp-ac-sim-ppnt | ssl.truststore.location = null
kafka | ssl.keystore.location = null
policy-clamp-ac-pf-ppnt | ssl.secure.random.implementation = null
policy-pap | send.buffer.bytes = 131072
policy-clamp-runtime-acm | socket.connection.setup.timeout.ms = 10000
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | buffer.memory = 33554432
simulator | 2024-01-15 15:27:37,098 INFO GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator (as some-key) ...
policy-clamp-ac-sim-ppnt | ssl.truststore.password = null
kafka | ssl.keystore.password = null
policy-clamp-ac-pf-ppnt | ssl.trustmanager.algorithm = PKIX
policy-pap | session.timeout.ms = 45000
policy-clamp-runtime-acm | ssl.cipher.suites = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | client.dns.lookup = use_all_dns_ips
simulator | 2024-01-15 15:27:37,097 INFO UEB GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator
simulator | 2024-01-15 15:27:37,100 WARN GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator will send credentials over a clear channel.
kafka | ssl.keystore.type = JKS
policy-clamp-ac-pf-ppnt | ssl.truststore.certificates = null
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-clamp-runtime-acm | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator |
policy-clamp-ac-http-ppnt | client.id = producer-1
policy-clamp-ac-sim-ppnt | ssl.truststore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
simulator | 2024-01-15 15:27:37,100 INFO 172.17.0.2 - - [15/Jan/2024:15:27:22 +0000] "GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-clamp-ac-pf-ppnt | ssl.truststore.location = null
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-clamp-runtime-acm | ssl.endpoint.identification.algorithm = https
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-clamp-ac-http-ppnt | compression.type = none
policy-clamp-ac-sim-ppnt | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | ssl.protocol = TLSv1.3
simulator | 2024-01-15 15:27:37,100 INFO GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator (as some-key) ...
policy-clamp-ac-pf-ppnt | ssl.truststore.password = null
policy-pap | ssl.cipher.suites = null
policy-clamp-runtime-acm | ssl.engine.factory.class = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | connections.max.idle.ms = 540000
policy-clamp-ac-sim-ppnt |
kafka | ssl.provider = null
simulator | 2024-01-15 15:27:52,106 INFO 172.17.0.2 - - [15/Jan/2024:15:27:37 +0000] "GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-clamp-ac-pf-ppnt | ssl.truststore.type = JKS
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-clamp-runtime-acm | ssl.key.password = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | delivery.timeout.ms = 120000
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:39.900+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
kafka | ssl.secure.random.implementation = null
simulator | 2024-01-15 15:27:52,107 INFO --> HTTP/1.1 200 OK
policy-clamp-ac-pf-ppnt | transaction.timeout.ms = 60000
policy-pap | ssl.endpoint.identification.algorithm = https
policy-clamp-runtime-acm | ssl.keymanager.algorithm = SunX509
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | enable.idempotence = true
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:39.900+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | ssl.trustmanager.algorithm = PKIX
simulator | 2024-01-15 15:27:52,107 INFO --> HTTP/1.1 200 OK
policy-clamp-ac-pf-ppnt | transactional.id = null
policy-pap | ssl.engine.factory.class = null
policy-clamp-runtime-acm | ssl.keystore.certificate.chain = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | interceptor.classes = []
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:39.900+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332459898
kafka | ssl.truststore.certificates = null
simulator | 2024-01-15 15:27:52,107 INFO UEB GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator
policy-clamp-ac-pf-ppnt | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | ssl.key.password = null
policy-clamp-runtime-acm | ssl.keystore.key = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:39.909+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-1, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Subscribed to topic(s): policy-acruntime-participant
kafka | ssl.truststore.location = null
simulator | 2024-01-15 15:27:52,107 INFO UEB GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator
policy-clamp-ac-pf-ppnt |
policy-pap | ssl.keymanager.algorithm = SunX509
policy-clamp-runtime-acm | ssl.keystore.location = null
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-clamp-ac-http-ppnt | linger.ms = 0
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:39.961+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.acm.participant.sim.config.MicrometerConfig
kafka | ssl.truststore.password = null
simulator | 2024-01-15 15:27:52,108 WARN GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator will send credentials over a clear channel.
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.757+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-pap | ssl.keystore.certificate.chain = null
policy-clamp-runtime-acm | ssl.keystore.password = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | max.block.ms = 60000
kafka | ssl.truststore.type = JKS
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:40.868+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2b10ace9, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@52169758, org.springframework.security.web.context.SecurityContextHolderFilter@1702830d, org.springframework.security.web.header.HeaderWriterFilter@8deb645, org.springframework.security.web.authentication.logout.LogoutFilter@2b34e38c, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@7cea0110, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@278667fd, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6b52dd31, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3eda0aeb, org.springframework.security.web.access.ExceptionTranslationFilter@6c6333cd, org.springframework.security.web.access.intercept.AuthorizationFilter@1c9e07c6]
simulator | 2024-01-15 15:27:52,108 WARN GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator will send credentials over a clear channel.
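Editor's note: the pf-ppnt line "Instantiated an idempotent producer" ties together the producer settings visible in the surrounding dump (acks = -1, enable.idempotence = true, retries = 2147483647). With idempotence enabled the Kafka client enforces acks=all and bounds in-flight requests so that retries cannot duplicate or reorder records within a partition. A minimal sketch of such a producer, assuming a plain Java client; the topic is from the log, the payload is illustrative:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public final class IdempotentProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Matches the dump above: idempotence implies acks=all and effectively
        // unbounded retries (2147483647); the broker assigns a producer id, which
        // is the "ProducerId set to ... with epoch ..." line seen later in the log.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("policy-acruntime-participant",
                "{\"messageType\":\"PARTICIPANT_REGISTER\"}")); // illustrative payload
        }
    }
}
```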
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.828+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-pap | ssl.keystore.key = null
policy-clamp-runtime-acm | ssl.keystore.type = JKS
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | max.in.flight.requests.per.connection = 5
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.074+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
simulator | 2024-01-15 15:27:52,108 INFO GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator (as some-key) ...
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.828+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-pap | ssl.keystore.location = null
policy-clamp-runtime-acm | ssl.protocol = TLSv1.3
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | max.request.size = 1048576
kafka | transaction.max.timeout.ms = 900000
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.189+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
simulator | 2024-01-15 15:27:52,108 INFO GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator (as some-key) ...
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.828+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332485828
policy-pap | ssl.keystore.password = null
policy-clamp-runtime-acm | ssl.provider = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | metadata.max.age.ms = 300000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.265+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/onap/policy/clamp/acm/simparticipant'
simulator | 2024-01-15 15:27:52,109 INFO 172.17.0.2 - - [15/Jan/2024:15:27:37 +0000] "GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.829+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=400ffd83-9aaa-4908-a701-f225a47ce48f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | ssl.keystore.type = JKS
policy-clamp-runtime-acm | ssl.secure.random.implementation = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | metadata.max.idle.ms = 300000
kafka | transaction.state.log.load.buffer.size = 5242880
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.310+00:00|INFO|ServiceManager|main] service manager starting
simulator | 2024-01-15 15:28:07,112 INFO 172.17.0.2 - - [15/Jan/2024:15:27:52 +0000] "GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.829+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantMessagePublisher$$SpringCGLIB$$0
policy-pap | ssl.protocol = TLSv1.3
policy-clamp-runtime-acm | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-clamp-ac-http-ppnt | metric.reporters = []
kafka | transaction.state.log.min.isr = 2
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.310+00:00|INFO|ServiceManager|main] service manager starting Topic endpoint management
simulator | 2024-01-15 15:28:07,113 INFO --> HTTP/1.1 200 OK
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.834+00:00|INFO|ServiceManager|main] service manager starting Listener AcPropertyUpdateListener
policy-pap | ssl.provider = null
policy-clamp-runtime-acm | ssl.truststore.certificates = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | metrics.num.samples = 2
kafka | transaction.state.log.num.partitions = 50
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.333+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=845348fa-d712-41dc-bc31-ba3c79964bd7, consumerInstance=policy-clamp-ac-sim-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
simulator | 2024-01-15 15:28:07,114 INFO UEB GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantPrimeListener
policy-pap | ssl.secure.random.implementation = null
policy-clamp-runtime-acm | ssl.truststore.location = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | metrics.recording.level = INFO
kafka | transaction.state.log.replication.factor = 3
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.378+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
simulator | 2024-01-15 15:28:07,114 INFO 172.17.0.2 - - [15/Jan/2024:15:27:52 +0000] "GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionMigrationListener
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-clamp-runtime-acm | ssl.truststore.password = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | metrics.sample.window.ms = 30000
kafka | transaction.state.log.segment.bytes = 104857600
policy-clamp-ac-sim-ppnt | allow.auto.create.topics = true
simulator | 2024-01-15 15:28:07,115 WARN GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator will send credentials over a clear channel.
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRestartListener
policy-pap | ssl.truststore.certificates = null
policy-clamp-runtime-acm | ssl.truststore.type = JKS
policy-db-migrator |
policy-clamp-ac-http-ppnt | partitioner.adaptive.partitioning.enable = true
kafka | transactional.id.expiration.ms = 604800000
policy-clamp-ac-sim-ppnt | auto.commit.interval.ms = 5000
simulator | 2024-01-15 15:28:07,115 INFO GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator (as some-key) ...
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRegisterAckListener
policy-pap | ssl.truststore.location = null
policy-clamp-runtime-acm | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator |
policy-clamp-ac-http-ppnt | partitioner.availability.timeout.ms = 0
kafka | unclean.leader.election.enable = false
policy-clamp-ac-sim-ppnt | auto.include.jmx.reporter = true
simulator | 2024-01-15 15:28:07,116 INFO --> HTTP/1.1 200 OK
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantStatusReqListener
policy-pap | ssl.truststore.password = null
policy-clamp-runtime-acm |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-clamp-ac-http-ppnt | partitioner.class = null
kafka | unstable.api.versions.enable = false
policy-clamp-ac-sim-ppnt | auto.offset.reset = latest
simulator | 2024-01-15 15:28:07,116 INFO UEB GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionStateChangeListener
policy-pap | ssl.truststore.type = JKS
policy-clamp-runtime-acm | [2024-01-15T15:28:23.737+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | partitioner.ignore.keys = false
kafka | zookeeper.clientCnxnSocket = null
policy-clamp-ac-sim-ppnt | bootstrap.servers = [kafka:9092]
simulator | 2024-01-15 15:28:07,117 WARN GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator will send credentials over a clear channel.
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionDeployListener
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-runtime-acm | [2024-01-15T15:28:23.737+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | receive.buffer.bytes = 32768
kafka | zookeeper.connect = zookeeper:2181
policy-clamp-ac-sim-ppnt | check.crcs = true
simulator | 2024-01-15 15:28:07,117 INFO GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator (as some-key) ...
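Editor's note: the pf-ppnt "service manager starting Listener ..." lines show a start-in-order lifecycle: each named publisher or listener is registered and started one by one, ending in "service manager started". A minimal sketch of that pattern under assumed, hypothetical interfaces (these are not the ONAP ServiceManager classes):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;

public final class ServiceManagerSketch {
    private final Map<String, Runnable> starters = new LinkedHashMap<>();
    private final Deque<Runnable> stoppers = new ArrayDeque<>();

    public void addAction(String name, Runnable start, Runnable stop) {
        // LinkedHashMap preserves registration order, which is the order
        // the "service manager starting <name>" lines appear in the log.
        starters.put(name, () -> { start.run(); stoppers.push(stop); });
    }

    public void start() {
        starters.forEach((name, action) -> {
            System.out.println("service manager starting " + name);
            action.run();
        });
        System.out.println("service manager started");
    }

    public void stop() {
        // Tear down in reverse registration order.
        while (!stoppers.isEmpty()) stoppers.pop().run();
    }
}
```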
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantDeregisterAckListener
policy-pap |
policy-clamp-runtime-acm | [2024-01-15T15:28:23.737+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332503735
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | reconnect.backoff.max.ms = 1000
kafka | zookeeper.connection.timeout.ms = null
policy-clamp-ac-sim-ppnt | client.dns.lookup = use_all_dns_ips
simulator | 2024-01-15 15:28:22,121 INFO 172.17.0.2 - - [15/Jan/2024:15:28:07 +0000] "GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|ServiceManager|main] service manager starting Topic Message Dispatcher
policy-pap | [2024-01-15T15:28:09.694+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-clamp-runtime-acm | [2024-01-15T15:28:23.741+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-1, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Subscribed to topic(s): policy-acruntime-participant
policy-db-migrator |
policy-clamp-ac-http-ppnt | reconnect.backoff.ms = 50
kafka | zookeeper.max.in.flight.requests = 10
policy-clamp-ac-sim-ppnt | client.id = consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2
simulator | 2024-01-15 15:28:22,122 INFO --> HTTP/1.1 200 OK
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=cbd2f33b-08da-4df5-9be7-9783ed68c1a9, consumerInstance=policy-clamp-ac-pf-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@9301672
policy-pap | [2024-01-15T15:28:09.694+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-clamp-runtime-acm | [2024-01-15T15:28:23.982+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-db-migrator |
policy-clamp-ac-http-ppnt | request.timeout.ms = 30000
kafka | zookeeper.metadata.migration.enable = false
policy-clamp-ac-sim-ppnt | client.rack =
simulator | 2024-01-15 15:28:22,123 INFO 172.17.0.2 - - [15/Jan/2024:15:28:07 +0000] "GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=cbd2f33b-08da-4df5-9be7-9783ed68c1a9, consumerInstance=policy-clamp-ac-pf-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.835+00:00|INFO|ServiceManager|main] service manager started
policy-clamp-runtime-acm | [2024-01-15T15:28:24.272+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@598657cd, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@456aa471, org.springframework.security.web.context.SecurityContextHolderFilter@645dc557, org.springframework.security.web.header.HeaderWriterFilter@57c6feea, org.springframework.security.web.authentication.logout.LogoutFilter@4efb13f1, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@14be750c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@17884d, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@c732e1c, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@69e2fe3b, org.springframework.security.web.access.ExceptionTranslationFilter@3a3ad8e7, org.springframework.security.web.access.intercept.AuthorizationFilter@bd4ee01]
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-clamp-ac-http-ppnt | retries = 2147483647
kafka | zookeeper.session.timeout.ms = 18000
policy-clamp-ac-sim-ppnt | connections.max.idle.ms = 540000
simulator | 2024-01-15 15:28:22,123 INFO UEB GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.865+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers:
policy-clamp-ac-pf-ppnt | []
policy-clamp-runtime-acm | [2024-01-15T15:28:24.275+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.clamp.acm.runtime.config.MetricsConfiguration
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | retry.backoff.ms = 100
kafka | zookeeper.set.acl = false
policy-clamp-ac-sim-ppnt | default.api.timeout.ms = 60000
simulator | 2024-01-15 15:28:22,124 INFO --> HTTP/1.1 200 OK
policy-pap | [2024-01-15T15:28:09.694+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332489692
policy-pap | [2024-01-15T15:28:09.696+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-1, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Subscribed to topic(s): policy-pdp-pap
policy-clamp-runtime-acm | [2024-01-15T15:28:25.484+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | sasl.client.callback.handler.class = null
kafka | zookeeper.ssl.cipher.suites = null
policy-clamp-ac-sim-ppnt | enable.auto.commit = true
simulator | 2024-01-15 15:28:22,124 WARN GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator will send credentials over a clear channel.
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:05.867+00:00|INFO|network|main] [OUT|KAFKA|policy-acruntime-participant]
policy-pap | [2024-01-15T15:28:09.697+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-clamp-runtime-acm | [2024-01-15T15:28:25.562+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | sasl.jaas.config = null
kafka | zookeeper.ssl.client.enable = false
policy-clamp-ac-sim-ppnt | exclude.internal.topics = true
simulator | 2024-01-15 15:28:22,124 INFO GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator (as some-key) ...
policy-pap | allow.auto.create.topics = true
policy-clamp-runtime-acm | [2024-01-15T15:28:25.590+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/onap/policy/clamp/acm'
policy-clamp-ac-pf-ppnt | {"participantSupportedElementType":[{"id":"ca3110f9-47ca-4135-be0d-db3862b51b45","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"c20e92a6-a0cb-4d1c-b7de-8adb09ada244","timestamp":"2024-01-15T15:28:05.836283997Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"}
policy-db-migrator |
policy-clamp-ac-http-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | zookeeper.ssl.crl.enable = false
policy-clamp-ac-sim-ppnt | fetch.max.bytes = 52428800
simulator | 2024-01-15 15:28:22,124 INFO UEB GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator
policy-pap | auto.commit.interval.ms = 5000
policy-clamp-runtime-acm | [2024-01-15T15:28:25.621+00:00|INFO|ServiceManager|main] service manager starting
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:06.476+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: f-_bmQWWQMKgLbbohjyq1w
policy-db-migrator |
policy-clamp-ac-http-ppnt | sasl.kerberos.min.time.before.relogin = 60000
kafka | zookeeper.ssl.enabled.protocols = null
policy-clamp-ac-sim-ppnt | fetch.max.wait.ms = 500
simulator | 2024-01-15 15:28:22,126 WARN GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator will send credentials over a clear channel.
policy-pap | auto.include.jmx.reporter = true
policy-clamp-runtime-acm | [2024-01-15T15:28:25.621+00:00|INFO|ServiceManager|main] service manager starting Topic endpoint management
policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:06.477+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 3 with epoch 0
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-clamp-ac-http-ppnt | sasl.kerberos.service.name = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
policy-clamp-ac-sim-ppnt | fetch.min.bytes = 1
simulator | 2024-01-15 15:28:22,126 INFO GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator (as some-key) ...
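Editor's note: the JSON body above is the PARTICIPANT_REGISTER message the policy participant publishes on policy-acruntime-participant, advertising the element types it can run plus its identifiers. A sketch of building an equivalent payload with Jackson; the field names are copied from the message in the log, but the record types are illustrative, not the ONAP model classes:

```java
import java.time.Instant;
import java.util.List;
import java.util.UUID;
import com.fasterxml.jackson.databind.ObjectMapper;

public final class ParticipantRegisterSketch {
    // Field names mirror the PARTICIPANT_REGISTER payload seen in the log.
    record SupportedElementType(UUID id, String typeName, String typeVersion) {}
    record ParticipantRegister(List<SupportedElementType> participantSupportedElementType,
                               String messageType, UUID messageId, String timestamp,
                               UUID participantId) {}

    public static void main(String[] args) throws Exception {
        ParticipantRegister msg = new ParticipantRegister(
            List.of(new SupportedElementType(UUID.randomUUID(),
                "org.onap.policy.clamp.acm.PolicyAutomationCompositionElement", "1.0.0")),
            "PARTICIPANT_REGISTER", UUID.randomUUID(), Instant.now().toString(),
            UUID.randomUUID());
        // Serializes to the same shape as the logged message (Jackson 2.12+ handles records).
        System.out.println(new ObjectMapper().writeValueAsString(msg));
    }
}
```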
policy-pap | auto.offset.reset = latest policy-clamp-runtime-acm | [2024-01-15T15:28:25.627+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=53971418-7c64-4b4a-8b2e-3deb55882781, consumerInstance=policy-clamp-runtime-acm, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:06.476+00:00|INFO|Metadata|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Cluster ID: f-_bmQWWQMKgLbbohjyq1w policy-db-migrator | -------------- policy-clamp-ac-http-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | zookeeper.ssl.keystore.location = null policy-clamp-ac-sim-ppnt | group.id = 845348fa-d712-41dc-bc31-ba3c79964bd7 simulator | 2024-01-15 15:28:37,130 INFO 172.17.0.2 - - [15/Jan/2024:15:28:22 +0000] "GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)" policy-pap | bootstrap.servers = [kafka:9092] policy-clamp-runtime-acm | [2024-01-15T15:28:25.640+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:06.479+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-clamp-ac-http-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 simulator | 2024-01-15 15:28:37,130 INFO --> HTTP/1.1 200 OK policy-clamp-ac-sim-ppnt | group.instance.id = null policy-clamp-runtime-acm | allow.auto.create.topics = true policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:06.490+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] (Re-)joining group policy-db-migrator | -------------- policy-clamp-ac-http-ppnt | sasl.login.callback.handler.class = null policy-pap | check.crcs = true simulator | 2024-01-15 15:28:37,131 INFO UEB GET /events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator policy-clamp-ac-sim-ppnt | heartbeat.interval.ms = 3000 kafka | zookeeper.ssl.keystore.password = null policy-clamp-runtime-acm | auto.commit.interval.ms = 5000 policy-clamp-runtime-acm | auto.include.jmx.reporter = true policy-db-migrator | policy-clamp-ac-http-ppnt | sasl.login.class = null policy-pap | client.dns.lookup = use_all_dns_ips simulator | 2024-01-15 15:28:37,132 WARN GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator will send credentials over a clear channel. 
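The ConsumerConfig dumps interleaved through this section (bootstrap.servers = [kafka:9092], auto.offset.reset = latest, String key/value deserializers, topic policy-acruntime-participant) amount to an ordinary Kafka consumer. A self-contained sketch with an illustrative group id, not the generated UUID groups the participants use:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class AcmTopicTail {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "acm-log-tail");    // illustrative group id
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // matches auto.offset.reset above
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-acruntime-participant"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                    for (ConsumerRecord<String, String> r : records) {
                        System.out.println(r.value()); // each value is one participant message JSON
                    }
                }
            }
        }
    }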
policy-clamp-ac-sim-ppnt | interceptor.classes = [] policy-clamp-ac-sim-ppnt | internal.leave.group.on.close = true policy-clamp-runtime-acm | auto.offset.reset = latest policy-db-migrator | policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:06.525+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Request joining group due to: need to re-join with the given member-id: consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2-e8065291-3c92-4be5-abaf-df92508c2764 policy-clamp-ac-http-ppnt | sasl.login.connect.timeout.ms = null policy-pap | client.id = consumer-policy-pap-2 simulator | 2024-01-15 15:28:37,132 INFO --> HTTP/1.1 200 OK kafka | zookeeper.ssl.keystore.type = null policy-clamp-ac-sim-ppnt | internal.throw.on.fetch.stable.offset.unsupported = false policy-clamp-runtime-acm | bootstrap.servers = [kafka:9092] policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:06.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-clamp-ac-http-ppnt | sasl.login.read.timeout.ms = null policy-pap | client.rack = simulator | 2024-01-15 15:28:37,132 INFO 172.17.0.2 - - [15/Jan/2024:15:28:22 +0000] "GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)" kafka | zookeeper.ssl.ocsp.enable = false policy-clamp-ac-sim-ppnt | isolation.level = read_uncommitted policy-clamp-runtime-acm | check.crcs = true policy-db-migrator | -------------- policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:06.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] (Re-)joining group policy-clamp-ac-http-ppnt | sasl.login.refresh.buffer.seconds = 300 policy-clamp-ac-http-ppnt | sasl.login.refresh.min.period.seconds = 60 simulator | 2024-01-15 15:28:37,132 INFO GET http://simulator:3904/events/appc-lcm-read/bc73ad39-2e41-4278-8095-9ebf50e479e5/simulator (as some-key) ... 
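The MemberIdRequiredException above is not an error: on first contact the broker rejects the join, hands the client a member id, and the client re-joins with it, which is exactly the "Request joining group due to: need to re-join with the given member-id" and "(Re-)joining group" pair in the log. A sketch of a listener that would surface the same lifecycle on a plain consumer (hypothetical class, not the policy endpoint wrapper):

    import java.util.Collection;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    public class LoggingRebalanceListener implements ConsumerRebalanceListener {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Fires before a rebalance takes partitions away from this member.
            System.out.println("Revoked: " + partitions);
        }
        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // Fires after the "Successfully synced group" step, when the
            // coordinator has handed this member its partitions.
            System.out.println("Assigned: " + partitions);
        }
    }

Wired in via consumer.subscribe(List.of("policy-acruntime-participant"), new LoggingRebalanceListener()).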
kafka | zookeeper.ssl.protocol = TLSv1.2 policy-clamp-ac-sim-ppnt | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-clamp-runtime-acm | client.dns.lookup = use_all_dns_ips policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:06.550+00:00|INFO|ParticipantMessagePublisher|main] Sent Participant Register message to CLAMP - ParticipantRegister(super=ParticipantMessage(messageType=PARTICIPANT_REGISTER, messageId=c20e92a6-a0cb-4d1c-b7de-8adb09ada244, timestamp=2024-01-15T15:28:05.836283997Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c03, automationCompositionId=null, compositionId=null), participantSupportedElementType=[ParticipantSupportedElementType(id=ca3110f9-47ca-4135-be0d-db3862b51b45, typeName=org.onap.policy.clamp.acm.PolicyAutomationCompositionElement, typeVersion=1.0.0)]) policy-clamp-ac-http-ppnt | sasl.login.refresh.window.factor = 0.8 policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true kafka | zookeeper.ssl.truststore.location = null policy-clamp-ac-sim-ppnt | max.partition.fetch.bytes = 1048576 policy-clamp-runtime-acm | client.id = consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2 policy-clamp-runtime-acm | client.rack = policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:06.555+00:00|INFO|PolicyParticipantApplication|main] Started PolicyParticipantApplication in 9.262 seconds (process running for 10.071) policy-clamp-ac-http-ppnt | sasl.login.refresh.window.jitter = 0.05 policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:09.534+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Successfully joined group with generation Generation{generationId=1, memberId='consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2-e8065291-3c92-4be5-abaf-df92508c2764', protocol='range'} policy-clamp-ac-http-ppnt | sasl.login.retry.backoff.max.ms = 10000 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 kafka | [2024-01-15 15:27:25,832] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-01-15 15:27:25,832] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:09.543+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Finished assignment for group at generation 1: {consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2-e8065291-3c92-4be5-abaf-df92508c2764=Assignment(partitions=[policy-acruntime-participant-0])} policy-clamp-ac-http-ppnt | sasl.login.retry.backoff.ms = 100 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-clamp-runtime-acm | connections.max.idle.ms = 540000 kafka | [2024-01-15 15:27:25,834] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-clamp-ac-pf-ppnt | 
[2024-01-15T15:28:09.552+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Successfully synced group in generation Generation{generationId=1, memberId='consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2-e8065291-3c92-4be5-abaf-df92508c2764', protocol='range'} policy-clamp-ac-http-ppnt | sasl.mechanism = GSSAPI simulator | 2024-01-15 15:28:37,133 INFO UEB GET /events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator policy-pap | heartbeat.interval.ms = 3000 policy-clamp-runtime-acm | default.api.timeout.ms = 60000 kafka | [2024-01-15 15:27:25,836] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-db-migrator | -------------- policy-db-migrator | policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:09.552+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Notifying assignor about the new Assignment(partitions=[policy-acruntime-participant-0]) policy-clamp-ac-http-ppnt | sasl.oauthbearer.clock.skew.seconds = 30 simulator | 2024-01-15 15:28:37,134 WARN GET http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator will send credentials over a clear channel. policy-pap | interceptor.classes = [] policy-clamp-runtime-acm | enable.auto.commit = true kafka | [2024-01-15 15:27:25,893] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:09.555+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Adding newly assigned partitions: policy-acruntime-participant-0 policy-clamp-ac-http-ppnt | sasl.oauthbearer.expected.audience = null policy-pap | internal.leave.group.on.close = true policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:09.563+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Found no committed offset for partition policy-acruntime-participant-0 policy-clamp-ac-http-ppnt | sasl.oauthbearer.expected.issuer = null policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-clamp-ac-sim-ppnt | max.poll.interval.ms = 300000 policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:09.576+00:00|INFO|SubscriptionState|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2, groupId=cbd2f33b-08da-4df5-9be7-9783ed68c1a9] Resetting offset for partition policy-acruntime-participant-0 to position FetchPosition{offset=4, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
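"Found no committed offset for partition ... Resetting offset ... to position FetchPosition{offset=4, ...}" above is the auto.offset.reset = latest path: the group has never committed for that partition, so the consumer seeks to the current end of the log. A sketch of the same decision made explicitly (assumes the partition is already assigned to the given consumer):

    import java.util.Map;
    import java.util.Set;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class OffsetInspect {
        // Sketch only: shows why the reset in the log happened.
        static void explainReset(KafkaConsumer<String, String> consumer) {
            TopicPartition tp = new TopicPartition("policy-acruntime-participant", 0);
            Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(Set.of(tp));
            if (committed.get(tp) == null) {
                // No committed offset for this group, so auto.offset.reset = latest
                // sends the consumer to the current end of the partition.
                consumer.seekToEnd(Set.of(tp));
                System.out.println("reset to end, position=" + consumer.position(tp));
            }
        }
    }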
policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | isolation.level = read_uncommitted policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:30.928+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-ac-http-ppnt | sasl.oauthbearer.jwks.endpoint.url = null policy-clamp-ac-http-ppnt | sasl.oauthbearer.scope.claim.name = scope policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-clamp-ac-http-ppnt | sasl.oauthbearer.sub.claim.name = sub policy-clamp-ac-http-ppnt | sasl.oauthbearer.token.endpoint.url = null policy-clamp-ac-pf-ppnt | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"552fa693-0f35-4f3d-bcba-48cfac49cb30","timestamp":"2024-01-15T15:28:30.853208592Z"} policy-pap | max.partition.fetch.bytes = 1048576 policy-clamp-ac-http-ppnt | security.protocol = PLAINTEXT policy-clamp-ac-http-ppnt | security.providers = null policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:30.991+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant] policy-pap | max.poll.interval.ms = 300000 policy-clamp-ac-http-ppnt | send.buffer.bytes = 131072 policy-clamp-ac-http-ppnt | socket.connection.setup.timeout.max.ms = 30000 policy-clamp-ac-pf-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"ca3110f9-47ca-4135-be0d-db3862b51b45","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"aa175f6c-327a-44d3-a931-143c53626cc3","timestamp":"2024-01-15T15:28:30.960765419Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} policy-pap | max.poll.records = 500 policy-clamp-ac-http-ppnt | socket.connection.setup.timeout.ms = 10000 policy-clamp-ac-http-ppnt | ssl.cipher.suites = null policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:31.000+00:00|INFO|ParticipantMessagePublisher|KAFKA-source-policy-acruntime-participant] Sent Participant Status message to CLAMP - ParticipantStatus(super=ParticipantMessage(messageType=PARTICIPANT_STATUS, messageId=aa175f6c-327a-44d3-a931-143c53626cc3, timestamp=2024-01-15T15:28:30.960765419Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c03, automationCompositionId=null, compositionId=null), state=ON_LINE, participantDefinitionUpdates=[], automationCompositionInfoList=[], participantSupportedElementType=[ParticipantSupportedElementType(id=ca3110f9-47ca-4135-be0d-db3862b51b45, typeName=org.onap.policy.clamp.acm.PolicyAutomationCompositionElement, typeVersion=1.0.0)]) policy-clamp-runtime-acm | exclude.internal.topics = true policy-clamp-runtime-acm | fetch.max.bytes = 52428800 policy-clamp-runtime-acm | fetch.max.wait.ms = 500 policy-clamp-runtime-acm | fetch.min.bytes = 1 policy-pap | metadata.max.age.ms = 300000 policy-clamp-ac-http-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-clamp-ac-http-ppnt | ssl.endpoint.identification.algorithm = https policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:31.006+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | group.id = 53971418-7c64-4b4a-8b2e-3deb55882781 policy-clamp-runtime-acm | group.instance.id = null simulator | 2024-01-15 15:28:37,134 INFO GET 
http://simulator:3904/events/appc-cl/22829b8b-7eda-4280-8fbb-124377a1495c/simulator (as some-key) ... policy-db-migrator | -------------- policy-pap | metric.reporters = [] policy-clamp-ac-http-ppnt | ssl.engine.factory.class = null policy-clamp-ac-http-ppnt | ssl.key.password = null policy-clamp-ac-pf-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"d19a6d28-a7d0-4fb8-a2bf-addcffc2e329","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"c5714361-0311-4541-aeaa-881ea2ed50d9","timestamp":"2024-01-15T15:28:30.909550659Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} policy-clamp-runtime-acm | heartbeat.interval.ms = 3000 policy-clamp-runtime-acm | interceptor.classes = [] policy-db-migrator | policy-pap | metrics.num.samples = 2 policy-clamp-ac-http-ppnt | ssl.keymanager.algorithm = SunX509 policy-clamp-ac-http-ppnt | ssl.keystore.certificate.chain = null policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:31.006+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS policy-clamp-ac-sim-ppnt | max.poll.records = 500 policy-clamp-runtime-acm | internal.leave.group.on.close = true policy-clamp-runtime-acm | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | metrics.recording.level = INFO kafka | [2024-01-15 15:27:25,899] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) policy-clamp-ac-http-ppnt | ssl.keystore.key = null policy-clamp-ac-sim-ppnt | metadata.max.age.ms = 300000 policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:31.006+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | isolation.level = read_uncommitted policy-clamp-runtime-acm | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-01-15 15:27:25,915] INFO Loaded 0 logs in 21ms (kafka.log.LogManager) policy-clamp-ac-http-ppnt | ssl.keystore.location = null policy-clamp-ac-sim-ppnt | metric.reporters = [] policy-clamp-ac-pf-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"8ccc2300-50ed-4075-a45b-c429651e9a40","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"d215a98e-7258-4498-8b3e-98d0866bfe7e","timestamp":"2024-01-15T15:28:30.907805883Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} policy-clamp-runtime-acm | max.partition.fetch.bytes = 1048576 policy-db-migrator | policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | [2024-01-15 15:27:25,918] INFO Starting log cleanup with a period of 300000 ms. 
(kafka.log.LogManager) policy-clamp-ac-http-ppnt | ssl.keystore.password = null policy-clamp-ac-sim-ppnt | metrics.num.samples = 2 policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:31.006+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS policy-clamp-runtime-acm | max.poll.interval.ms = 300000 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-pap | receive.buffer.bytes = 65536 kafka | [2024-01-15 15:27:25,919] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) policy-clamp-ac-http-ppnt | ssl.keystore.type = JKS policy-clamp-ac-sim-ppnt | metrics.recording.level = INFO policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:31.012+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | max.poll.records = 500 policy-db-migrator | -------------- policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-01-15 15:27:25,929] INFO Starting the log cleaner (kafka.log.LogCleaner) policy-clamp-ac-http-ppnt | ssl.protocol = TLSv1.3 policy-clamp-ac-sim-ppnt | metrics.sample.window.ms = 30000 policy-clamp-ac-pf-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"7f97fcb8-2a7c-4f99-b027-f3849613ccbf","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"ccd5cbb5-86b9-45b0-aa86-b9901ef300a9","timestamp":"2024-01-15T15:28:30.922233847Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"} policy-clamp-runtime-acm | metadata.max.age.ms = 300000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | reconnect.backoff.ms = 50 kafka | [2024-01-15 15:27:25,976] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) policy-clamp-ac-http-ppnt | ssl.provider = null policy-clamp-ac-sim-ppnt | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:31.012+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS policy-clamp-runtime-acm | metric.reporters = [] policy-db-migrator | -------------- policy-pap | request.timeout.ms = 30000 kafka | [2024-01-15 15:27:25,993] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) policy-clamp-ac-http-ppnt | ssl.secure.random.implementation = null policy-clamp-ac-sim-ppnt | receive.buffer.bytes = 65536 policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:31.013+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | metrics.num.samples = 2 policy-db-migrator | policy-pap | retry.backoff.ms = 100 kafka | [2024-01-15 15:27:26,009] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) policy-clamp-ac-http-ppnt | ssl.trustmanager.algorithm = PKIX policy-clamp-ac-sim-ppnt | reconnect.backoff.max.ms = 1000 policy-clamp-ac-pf-ppnt | 
{"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"ca3110f9-47ca-4135-be0d-db3862b51b45","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"aa175f6c-327a-44d3-a931-143c53626cc3","timestamp":"2024-01-15T15:28:30.960765419Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} policy-clamp-runtime-acm | metrics.recording.level = INFO policy-db-migrator | policy-pap | sasl.client.callback.handler.class = null kafka | [2024-01-15 15:27:26,038] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) policy-clamp-ac-http-ppnt | ssl.truststore.certificates = null policy-clamp-ac-sim-ppnt | reconnect.backoff.ms = 50 policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:31.013+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS policy-clamp-runtime-acm | metrics.sample.window.ms = 30000 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-pap | sasl.jaas.config = null kafka | [2024-01-15 15:27:26,505] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) policy-clamp-ac-http-ppnt | ssl.truststore.location = null policy-clamp-ac-sim-ppnt | request.timeout.ms = 30000 policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:45.298+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-db-migrator | -------------- policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-01-15 15:27:26,538] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) policy-clamp-ac-http-ppnt | ssl.truststore.password = null policy-clamp-ac-sim-ppnt | retry.backoff.ms = 100 policy-clamp-ac-pf-ppnt | {"messageType":"PARTICIPANT_PRIME","messageId":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","timestamp":"2024-01-15T15:28:45.269562160Z","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d"} policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-01-15 15:27:26,538] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) policy-clamp-ac-http-ppnt | ssl.truststore.type = JKS policy-clamp-ac-sim-ppnt | sasl.client.callback.handler.class = null policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:45.307+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-acruntime-participant] policy-db-migrator | -------------- policy-pap | sasl.kerberos.service.name = null kafka | [2024-01-15 15:27:26,544] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) policy-clamp-ac-http-ppnt | transaction.timeout.ms = 60000 policy-clamp-ac-sim-ppnt | sasl.jaas.config = null policy-clamp-ac-pf-ppnt | 
{"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} policy-clamp-runtime-acm | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-01-15 15:27:26,549] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) policy-clamp-ac-http-ppnt | transactional.id = null policy-clamp-ac-sim-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:45.313+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | receive.buffer.bytes = 65536 policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-01-15 15:27:26,571] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-clamp-ac-http-ppnt | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-clamp-ac-sim-ppnt | sasl.kerberos.min.time.before.relogin = 60000 policy-clamp-ac-pf-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"} policy-clamp-runtime-acm | reconnect.backoff.max.ms = 1000 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-pap | sasl.login.callback.handler.class = null kafka | [2024-01-15 15:27:26,576] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-clamp-ac-http-ppnt | policy-clamp-ac-sim-ppnt | sasl.kerberos.service.name = null policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:45.313+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK policy-clamp-runtime-acm | reconnect.backoff.ms = 50 policy-db-migrator | -------------- policy-pap | sasl.login.class = null kafka | [2024-01-15 15:27:26,578] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.513+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05 policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:45.319+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | request.timeout.ms = 30000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-01-15 15:27:26,581] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.626+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-clamp-ac-pf-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"} policy-clamp-runtime-acm | retry.backoff.ms = 100 policy-db-migrator | -------------- policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-01-15 15:27:26,595] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.626+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-clamp-ac-sim-ppnt | sasl.login.callback.handler.class = null policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:45.319+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK policy-clamp-runtime-acm | sasl.client.callback.handler.class = null policy-db-migrator | policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-01-15 15:27:26,623] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.626+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332464626 policy-clamp-ac-sim-ppnt | sasl.login.class = null policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:45.322+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | sasl.jaas.config = null policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql kafka | [2024-01-15 15:27:26,684] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1705332446669,1705332446669,1,0,0,72057611093999617,258,0,27 policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.627+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=68d87dd1-f764-4706-9a83-bc8fb3f5fe68, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-clamp-ac-sim-ppnt | sasl.login.connect.timeout.ms = null policy-clamp-ac-pf-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"} policy-clamp-runtime-acm | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | (kafka.zk.KafkaZkClient) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.668+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantMessagePublisher$$SpringCGLIB$$0 policy-clamp-ac-sim-ppnt | sasl.login.read.timeout.ms = null policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:45.323+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK policy-clamp-runtime-acm | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-15 15:27:26,686] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.686+00:00|INFO|ServiceManager|main] service manager starting Listener AcPropertyUpdateListener policy-clamp-ac-sim-ppnt | sasl.login.refresh.buffer.seconds = 300 policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:45.412+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-clamp-runtime-acm | sasl.kerberos.service.name = null kafka | [2024-01-15 15:27:26,768] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.687+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantPrimeListener policy-clamp-ac-sim-ppnt | sasl.login.refresh.min.period.seconds = 60 policy-clamp-ac-pf-ppnt | 
{"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90","state":"ON_LINE"} policy-db-migrator | policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-01-15 15:27:26,775] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.687+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionMigrationListener policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.factor = 0.8 policy-clamp-ac-pf-ppnt | [2024-01-15T15:28:45.412+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK policy-pap | sasl.login.retry.backoff.ms = 100 policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-01-15 15:27:26,785] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.687+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRestartListener policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.jitter = 0.05 policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.mechanism = GSSAPI policy-clamp-runtime-acm | sasl.login.callback.handler.class = null kafka | [2024-01-15 15:27:26,786] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.687+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRegisterAckListener policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.ms = 100 policy-clamp-ac-sim-ppnt | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-clamp-runtime-acm | sasl.login.class = null kafka | [2024-01-15 15:27:26,794] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.688+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantStatusReqListener policy-clamp-ac-sim-ppnt | sasl.oauthbearer.clock.skew.seconds = 30 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.audience = null policy-clamp-runtime-acm | sasl.login.connect.timeout.ms = null kafka | [2024-01-15 15:27:26,813] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.688+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionStateChangeListener policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.issuer = null policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.expected.issuer = null policy-clamp-runtime-acm | sasl.login.read.timeout.ms = null kafka | [2024-01-15 15:27:26,813] INFO [GroupCoordinator 1]: Starting up. 
(kafka.coordinator.group.GroupCoordinator) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.688+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionDeployListener policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-clamp-runtime-acm | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-01-15 15:27:26,817] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.688+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantDeregisterAckListener policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.url = null policy-clamp-ac-sim-ppnt | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-clamp-runtime-acm | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-01-15 15:27:26,820] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.688+00:00|INFO|ServiceManager|main] service manager starting Topic Message Dispatcher policy-clamp-ac-sim-ppnt | sasl.oauthbearer.sub.claim.name = sub policy-clamp-ac-sim-ppnt | sasl.oauthbearer.token.endpoint.url = null policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-clamp-runtime-acm | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-15 15:27:26,821] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.689+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=75c1079f-283a-4d21-9ddf-3c97158a5ec8, consumerInstance=policy-clamp-ac-http-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@506aabf6 policy-clamp-ac-sim-ppnt | security.protocol = PLAINTEXT policy-clamp-ac-sim-ppnt | security.providers = null policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-clamp-runtime-acm | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-01-15 15:27:26,845] INFO [TransactionCoordinator id=1] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.689+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=75c1079f-283a-4d21-9ddf-3c97158a5ec8, consumerInstance=policy-clamp-ac-http-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-clamp-ac-sim-ppnt | send.buffer.bytes = 131072 policy-clamp-ac-sim-ppnt | session.timeout.ms = 45000 policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-clamp-runtime-acm | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-01-15 15:27:26,849] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.689+00:00|INFO|ServiceManager|main] service manager started policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.max.ms = 30000 policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.ms = 10000 policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-clamp-runtime-acm | sasl.login.retry.backoff.ms = 100 kafka | [2024-01-15 15:27:26,849] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.741+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: policy-clamp-ac-sim-ppnt | ssl.cipher.suites = null policy-clamp-ac-sim-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-clamp-runtime-acm | sasl.mechanism = GSSAPI kafka | [2024-01-15 15:27:26,863] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) policy-clamp-ac-http-ppnt | [] policy-clamp-ac-sim-ppnt | ssl.endpoint.identification.algorithm = https policy-clamp-ac-sim-ppnt | ssl.engine.factory.class = null policy-pap | security.protocol = PLAINTEXT policy-clamp-runtime-acm | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-01-15 15:27:26,864] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 
(kafka.server.metadata.ZkMetadataCache) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:44.743+00:00|INFO|network|main] [OUT|KAFKA|policy-acruntime-participant] policy-clamp-ac-sim-ppnt | ssl.key.password = null policy-clamp-ac-sim-ppnt | ssl.keymanager.algorithm = SunX509 policy-pap | security.providers = null policy-clamp-runtime-acm | sasl.oauthbearer.expected.audience = null kafka | [2024-01-15 15:27:26,869] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) policy-clamp-ac-sim-ppnt | ssl.keystore.certificate.chain = null policy-clamp-ac-sim-ppnt | ssl.keystore.key = null policy-clamp-ac-http-ppnt | {"participantSupportedElementType":[{"id":"8ccc2300-50ed-4075-a45b-c429651e9a40","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"ba34238f-29ff-4ab8-ad2b-ce7105d71e60","timestamp":"2024-01-15T15:27:44.690201985Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} policy-pap | send.buffer.bytes = 131072 policy-clamp-runtime-acm | sasl.oauthbearer.expected.issuer = null kafka | [2024-01-15 15:27:26,875] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) policy-clamp-ac-sim-ppnt | ssl.keystore.location = null policy-clamp-ac-sim-ppnt | ssl.keystore.password = null policy-clamp-ac-http-ppnt | [2024-01-15T15:27:45.505+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 2 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | session.timeout.ms = 45000 policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-01-15 15:27:26,881] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) policy-clamp-ac-sim-ppnt | ssl.keystore.type = JKS policy-clamp-ac-sim-ppnt | ssl.protocol = TLSv1.3 policy-clamp-ac-http-ppnt | [2024-01-15T15:27:45.508+00:00|INFO|Metadata|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Cluster ID: f-_bmQWWQMKgLbbohjyq1w policy-clamp-ac-http-ppnt | [2024-01-15T15:27:45.558+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:26,888] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | policy-clamp-ac-http-ppnt | [2024-01-15T15:27:45.558+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: f-_bmQWWQMKgLbbohjyq1w policy-clamp-ac-http-ppnt | [2024-01-15T15:27:45.632+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 4 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} policy-clamp-ac-sim-ppnt | ssl.provider = null policy-clamp-ac-sim-ppnt | ssl.secure.random.implementation = null policy-clamp-ac-http-ppnt | [2024-01-15T15:27:45.705+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while 
fetching metadata with correlation id 4 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} policy-clamp-ac-http-ppnt | [2024-01-15T15:27:45.737+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 6 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-15 15:27:26,912] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-clamp-ac-sim-ppnt | ssl.trustmanager.algorithm = PKIX policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-clamp-ac-http-ppnt | [2024-01-15T15:27:45.841+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 8 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} policy-clamp-ac-http-ppnt | [2024-01-15T15:27:45.903+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 5 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:26,918] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-clamp-ac-http-ppnt | [2024-01-15T15:27:45.947+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 10 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.017+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 6 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:26,925] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) policy-db-migrator | -------------- policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.052+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 12 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.086+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 kafka | [2024-01-15 15:27:26,932] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) policy-clamp-ac-sim-ppnt | ssl.truststore.certificates = null policy-clamp-ac-http-ppnt | 
[2024-01-15T15:27:46.122+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 7 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-15 15:27:26,940] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) policy-clamp-ac-sim-ppnt | ssl.truststore.location = null policy-clamp-ac-sim-ppnt | ssl.truststore.password = null policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.173+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 14 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} policy-clamp-ac-sim-ppnt | ssl.truststore.type = JKS policy-db-migrator | kafka | [2024-01-15 15:27:26,941] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.238+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 8 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql kafka | [2024-01-15 15:27:26,942] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) policy-pap | ssl.cipher.suites = null policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.283+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 16 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | [2024-01-15 15:27:26,942] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.341+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 9 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | -------------- policy-db-migrator | kafka | [2024-01-15 15:27:26,942] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) policy-pap | ssl.endpoint.identification.algorithm = https policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.397+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 18 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.url = null policy-clamp-ac-sim-ppnt | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka 
kafka | [2024-01-15 15:27:26,944] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
policy-pap | ssl.engine.factory.class = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.449+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 10 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE}
policy-clamp-ac-sim-ppnt |
kafka | [2024-01-15 15:27:26,945] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
policy-pap | ssl.key.password = null
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.503+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 20 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE}
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.385+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
kafka | [2024-01-15 15:27:26,945] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
policy-pap | ssl.keymanager.algorithm = SunX509
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.552+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 11 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION}
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.385+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | [2024-01-15 15:27:26,946] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
policy-pap | ssl.keystore.certificate.chain = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.623+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 22 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE}
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.385+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332464385
kafka | [2024-01-15 15:27:26,946] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
policy-pap | ssl.keystore.key = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.655+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 12 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION}
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.385+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Subscribed to topic(s): policy-acruntime-participant
kafka | [2024-01-15 15:27:26,947] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
policy-pap | ssl.keystore.location = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.735+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 24 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE}
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.386+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=7c97b3bb-d233-4bfc-9f74-229c3af87960, alive=false, publisher=null]]: starting
kafka | [2024-01-15 15:27:26,953] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
policy-pap | ssl.keystore.password = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.773+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 13 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE}
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.454+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-01-15 15:27:26,954] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
policy-pap | ssl.keystore.type = JKS
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.838+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 26 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION}
policy-clamp-ac-sim-ppnt | acks = -1
kafka | [2024-01-15 15:27:26,958] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.875+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 14 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION}
policy-clamp-runtime-acm | sasl.oauthbearer.scope.claim.name = scope
policy-clamp-ac-sim-ppnt | auto.include.jmx.reporter = true
kafka | [2024-01-15 15:27:26,963] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.provider = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.947+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Error while fetching metadata with correlation id 28 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE}
policy-clamp-runtime-acm | sasl.oauthbearer.sub.claim.name = sub
policy-clamp-ac-sim-ppnt | batch.size = 16384
kafka | [2024-01-15 15:27:26,966] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.955+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-clamp-runtime-acm | sasl.oauthbearer.token.endpoint.url = null
policy-clamp-ac-sim-ppnt | bootstrap.servers = [kafka:9092]
kafka | [2024-01-15 15:27:26,972] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:46.961+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] (Re-)joining group
policy-clamp-runtime-acm | security.protocol = PLAINTEXT
policy-clamp-ac-sim-ppnt | buffer.memory = 33554432
kafka | [2024-01-15 15:27:26,974] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.truststore.certificates = null
policy-db-migrator |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:47.029+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Request joining group due to: need to re-join with the given member-id: consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2-8bda53c2-b517-48c5-903d-6393b9c731ee
policy-clamp-runtime-acm | security.providers = null
policy-clamp-ac-sim-ppnt | client.dns.lookup = use_all_dns_ips
kafka | [2024-01-15 15:27:26,974] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
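The repeated LEADER_NOT_AVAILABLE and UNKNOWN_TOPIC_OR_PARTITION warnings above are the usual transient race while the broker auto-creates policy-acruntime-participant and finishes leader election; the producer and consumer simply retry metadata fetches until the topic resolves. A minimal sketch of pre-creating the topic with the Kafka AdminClient, which would avoid most of these retries (the broker address and the single-partition, replication-factor-1 layout are taken from this log; the class name TopicBootstrap is hypothetical):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class TopicBootstrap {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // One partition, replication factor 1, matching the single-broker CSIT
                // assignment "HashMap(0 -> ArrayBuffer(1))" seen later in this log.
                NewTopic topic = new NewTopic("policy-acruntime-participant", 1, (short) 1);
                admin.createTopics(List.of(topic)).all().get(); // blocks until the controller acks
            }
        }
    }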
policy-pap | ssl.truststore.location = null
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:47.030+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-clamp-runtime-acm | send.buffer.bytes = 131072
policy-clamp-ac-sim-ppnt | client.id = producer-1
kafka | [2024-01-15 15:27:26,974] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser)
policy-pap | ssl.truststore.password = null
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:47.030+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] (Re-)joining group
policy-clamp-runtime-acm | session.timeout.ms = 45000
policy-clamp-ac-sim-ppnt | compression.type = none
kafka | [2024-01-15 15:27:26,975] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser)
policy-pap | ssl.truststore.type = JKS
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:47.092+00:00|INFO|ParticipantMessagePublisher|main] Sent Participant Register message to CLAMP - ParticipantRegister(super=ParticipantMessage(messageType=PARTICIPANT_REGISTER, messageId=ba34238f-29ff-4ab8-ad2b-ce7105d71e60, timestamp=2024-01-15T15:27:44.690201985Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c01, automationCompositionId=null, compositionId=null), participantSupportedElementType=[ParticipantSupportedElementType(id=8ccc2300-50ed-4075-a45b-c429651e9a40, typeName=org.onap.policy.clamp.acm.HttpAutomationCompositionElement, typeVersion=1.0.0)])
policy-clamp-runtime-acm | socket.connection.setup.timeout.max.ms = 30000
policy-clamp-ac-sim-ppnt | connections.max.idle.ms = 540000
kafka | [2024-01-15 15:27:26,975] INFO Kafka startTimeMs: 1705332446967 (org.apache.kafka.common.utils.AppInfoParser)
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:47.096+00:00|INFO|Application|main] Started Application in 18.529 seconds (process running for 20.035)
policy-clamp-runtime-acm | socket.connection.setup.timeout.ms = 10000
policy-clamp-ac-sim-ppnt | delivery.timeout.ms = 120000
kafka | [2024-01-15 15:27:26,975] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
policy-pap |
policy-db-migrator |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:50.068+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Successfully joined group with generation Generation{generationId=1, memberId='consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2-8bda53c2-b517-48c5-903d-6393b9c731ee', protocol='range'}
policy-clamp-runtime-acm | ssl.cipher.suites = null
policy-clamp-ac-sim-ppnt | enable.idempotence = true
kafka | [2024-01-15 15:27:26,977] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
policy-pap | [2024-01-15T15:28:09.703+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-db-migrator |
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:50.088+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Finished assignment for group at generation 1: {consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2-8bda53c2-b517-48c5-903d-6393b9c731ee=Assignment(partitions=[policy-acruntime-participant-0])}
policy-clamp-runtime-acm | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-clamp-ac-sim-ppnt | interceptor.classes = []
kafka | [2024-01-15 15:27:26,978] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
policy-pap | [2024-01-15T15:28:09.703+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:50.131+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Successfully synced group in generation Generation{generationId=1, memberId='consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2-8bda53c2-b517-48c5-903d-6393b9c731ee', protocol='range'}
policy-clamp-runtime-acm | ssl.endpoint.identification.algorithm = https
policy-clamp-ac-sim-ppnt | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-01-15 15:27:26,980] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
policy-pap | [2024-01-15T15:28:09.703+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332489703
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:50.132+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Notifying assignor about the new Assignment(partitions=[policy-acruntime-participant-0])
policy-clamp-runtime-acm | ssl.engine.factory.class = null
policy-clamp-ac-sim-ppnt | linger.ms = 0
kafka | [2024-01-15 15:27:26,981] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
policy-pap | [2024-01-15T15:28:09.704+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:50.135+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Adding newly assigned partitions: policy-acruntime-participant-0
policy-clamp-runtime-acm | ssl.key.password = null
policy-clamp-ac-sim-ppnt | max.block.ms = 60000
kafka | [2024-01-15 15:27:26,992] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
policy-db-migrator | --------------
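The participant's join sequence above (Discovered group coordinator, a first join rejected with MemberIdRequiredException, a re-join, then Successfully joined group with generation 1, sync, and Adding newly assigned partitions) is the standard Kafka consumer-group handshake: the broker deliberately fails the first JoinGroup so it can hand the client a member id. A minimal consumer sketch that drives the same handshake (topic, group id, and the auto.offset.reset value are from this log; the poll loop is illustrative, not ONAP code):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ParticipantListener {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "75c1079f-283a-4d21-9ddf-3c97158a5ec8");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // matches the configs dumped in this log
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-acruntime-participant"));
                while (true) {
                    // poll() drives the join/rebalance protocol logged above before any records arrive
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    records.forEach(r -> System.out.println(r.value()));
                }
            }
        }
    }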
policy-pap | [2024-01-15T15:28:10.111+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null), PdpSubGroup(pdpType=drools, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Drools 1.0.0, onap.policies.native.drools.Controller 1.0.0, onap.policies.native.drools.Artifact 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null), PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-pap | [2024-01-15T15:28:10.284+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-clamp-runtime-acm | ssl.keymanager.algorithm = SunX509
policy-clamp-ac-sim-ppnt | max.in.flight.requests.per.connection = 5
kafka | [2024-01-15 15:27:26,993] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | [2024-01-15T15:28:10.525+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@50caeb4b, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6fafbdac, org.springframework.security.web.context.SecurityContextHolderFilter@182fd26b, org.springframework.security.web.header.HeaderWriterFilter@1c3b221f, org.springframework.security.web.authentication.logout.LogoutFilter@374ba492, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@319058ce, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@49c4118b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@182dcd2b, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@c7c07ff, org.springframework.security.web.access.ExceptionTranslationFilter@27d6467, org.springframework.security.web.access.intercept.AuthorizationFilter@2f498f21]
policy-pap | [2024-01-15T15:28:11.428+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-clamp-runtime-acm | ssl.keystore.certificate.chain = null
policy-clamp-ac-sim-ppnt | max.request.size = 1048576
kafka | [2024-01-15 15:27:26,994] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | [2024-01-15T15:28:11.500+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-pap | [2024-01-15T15:28:11.518+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
policy-clamp-runtime-acm | ssl.keystore.key = null
policy-clamp-ac-sim-ppnt | metadata.max.age.ms = 300000
kafka | [2024-01-15 15:27:26,994] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-pap | [2024-01-15T15:28:11.540+00:00|INFO|ServiceManager|main] Policy PAP starting
policy-pap | [2024-01-15T15:28:11.540+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-clamp-runtime-acm | ssl.keystore.location = null
policy-clamp-ac-sim-ppnt | metadata.max.idle.ms = 300000
kafka | [2024-01-15 15:27:27,000] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | [2024-01-15T15:28:11.541+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
policy-pap | [2024-01-15T15:28:11.541+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
policy-clamp-runtime-acm | ssl.keystore.password = null
policy-clamp-ac-sim-ppnt | metric.reporters = []
kafka | [2024-01-15 15:27:27,148] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
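The spring.jpa.open-in-view warning PAP logs above is advisory: Spring Boot leaves the JPA session open for the whole web request unless the property is set explicitly. A sketch of silencing it at startup, assuming a self-contained Launcher class in place of the real PAP entry point:

    import java.util.Properties;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class Launcher {
        public static void main(String[] args) {
            SpringApplication app = new SpringApplication(Launcher.class);
            Properties defaults = new Properties();
            // An explicit value removes the JpaWebConfiguration warning seen in the PAP log.
            defaults.setProperty("spring.jpa.open-in-view", "false");
            app.setDefaultProperties(defaults);
            app.run(args);
        }
    }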
policy-pap | [2024-01-15T15:28:11.541+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
policy-pap | [2024-01-15T15:28:11.542+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
policy-clamp-runtime-acm | ssl.keystore.type = JKS
policy-clamp-ac-sim-ppnt | metrics.num.samples = 2
kafka | [2024-01-15 15:27:27,159] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-15T15:28:11.542+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
policy-pap | [2024-01-15T15:28:11.547+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=aa559ce3-1840-4027-b443-4b66dabb9280, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@17ebbf1e
policy-clamp-runtime-acm | ssl.protocol = TLSv1.3
policy-clamp-ac-sim-ppnt | metrics.recording.level = INFO
kafka | [2024-01-15 15:27:27,161] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator |
policy-pap | [2024-01-15T15:28:11.558+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=aa559ce3-1840-4027-b443-4b66dabb9280, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2024-01-15T15:28:11.558+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-clamp-runtime-acm | ssl.provider = null
policy-clamp-ac-sim-ppnt | metrics.sample.window.ms = 30000
kafka | [2024-01-15 15:27:27,167] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator |
policy-pap | allow.auto.create.topics = true
policy-pap | auto.commit.interval.ms = 5000
policy-clamp-runtime-acm | ssl.secure.random.implementation = null
policy-clamp-ac-sim-ppnt | partitioner.adaptive.partitioning.enable = true
kafka | [2024-01-15 15:27:32,150] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-pap | auto.include.jmx.reporter = true
policy-pap | auto.offset.reset = latest
policy-clamp-runtime-acm | ssl.trustmanager.algorithm = PKIX
policy-clamp-ac-sim-ppnt | partitioner.availability.timeout.ms = 0
kafka | [2024-01-15 15:27:32,151] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | check.crcs = true
policy-clamp-runtime-acm | ssl.truststore.certificates = null
policy-clamp-ac-sim-ppnt | partitioner.class = null
kafka | [2024-01-15 15:27:45,486] INFO Creating topic policy-acruntime-participant with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | client.dns.lookup = use_all_dns_ips
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:50.170+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Found no committed offset for partition policy-acruntime-participant-0
policy-clamp-runtime-acm | ssl.truststore.location = null
policy-clamp-ac-sim-ppnt | partitioner.ignore.keys = false
kafka | [2024-01-15 15:27:45,489] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-db-migrator | --------------
policy-pap | client.id = consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3
policy-clamp-ac-http-ppnt | [2024-01-15T15:27:50.201+00:00|INFO|SubscriptionState|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2, groupId=75c1079f-283a-4d21-9ddf-3c97158a5ec8] Resetting offset for partition policy-acruntime-participant-0 to position FetchPosition{offset=3, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
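The MessageTypeDispatcher that PAP registers a few lines up, together with the "discarding event of type ..." lines later in this log, implies dispatch-by-message-type: every JSON event carries a messageType field, and events with no registered listener are dropped. A hedged sketch of that pattern (this TypeDispatcher is illustrative and uses Gson; it is not the org.onap.policy.common class):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Consumer;
    import com.google.gson.JsonElement;
    import com.google.gson.JsonObject;
    import com.google.gson.JsonParser;

    public class TypeDispatcher {
        private final Map<String, Consumer<JsonObject>> listeners = new ConcurrentHashMap<>();

        public void register(String messageType, Consumer<JsonObject> listener) {
            listeners.put(messageType, listener);
        }

        public void onMessage(String json) {
            JsonObject event = JsonParser.parseString(json).getAsJsonObject();
            JsonElement typeField = event.get("messageType");
            if (typeField == null) {
                return; // no type field, nothing to route
            }
            String type = typeField.getAsString();
            Consumer<JsonObject> listener = listeners.get(type);
            if (listener == null) {
                // mirrors the "discarding event of type ..." INFO lines in this log
                System.out.println("discarding event of type " + type);
                return;
            }
            listener.accept(event);
        }
    }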
policy-clamp-runtime-acm | ssl.truststore.password = null
policy-clamp-ac-sim-ppnt | receive.buffer.bytes = 32768
kafka | [2024-01-15 15:27:45,544] INFO [Controller id=1] New topics: [Set(policy-acruntime-participant, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-acruntime-participant,Some(5o5LE87SQnOpV49rYbYX3g),Map(policy-acruntime-participant-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(lbqh0qd0Q4eTI3B1U7lVXA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | client.rack =
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:06.541+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-runtime-acm | ssl.truststore.type = JKS
policy-clamp-ac-sim-ppnt | reconnect.backoff.max.ms = 1000
kafka | [2024-01-15 15:27:45,552] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,policy-acruntime-participant-0,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | connections.max.idle.ms = 540000
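The __consumer_offsets topic is created above with Kafka's default 50 partitions; a consumer group's committed offsets (and its group coordinator) live on the partition the group id hashes to. A small sketch of the standard mapping, abs(groupId.hashCode()) % 50, using group ids taken from this log:

    public class CoordinatorPartition {
        // Kafka places a group's offsets on: abs(groupId.hashCode()) % offsets.topic.num.partitions
        static int coordinatorPartition(String groupId, int numOffsetsPartitions) {
            // & 0x7fffffff matches Kafka's Utils.abs, avoiding the Integer.MIN_VALUE edge case
            return (groupId.hashCode() & 0x7fffffff) % numOffsetsPartitions;
        }

        public static void main(String[] args) {
            System.out.println(coordinatorPartition("75c1079f-283a-4d21-9ddf-3c97158a5ec8", 50));
            System.out.println(coordinatorPartition("policy-pap", 50));
        }
    }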
{"participantSupportedElementType":[{"id":"ca3110f9-47ca-4135-be0d-db3862b51b45","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"c20e92a6-a0cb-4d1c-b7de-8adb09ada244","timestamp":"2024-01-15T15:28:05.836283997Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"} policy-clamp-runtime-acm | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-clamp-ac-sim-ppnt | reconnect.backoff.ms = 50 policy-clamp-ac-sim-ppnt | request.timeout.ms = 30000 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-pap | default.api.timeout.ms = 60000 policy-clamp-ac-http-ppnt | [2024-01-15T15:28:06.544+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_REGISTER policy-clamp-runtime-acm | policy-clamp-ac-sim-ppnt | retries = 2147483647 policy-db-migrator | -------------- policy-clamp-ac-http-ppnt | [2024-01-15T15:28:30.903+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | [2024-01-15T15:28:25.646+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-pap | enable.auto.commit = true policy-clamp-ac-http-ppnt | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"552fa693-0f35-4f3d-bcba-48cfac49cb30","timestamp":"2024-01-15T15:28:30.853208592Z"} policy-clamp-runtime-acm | [2024-01-15T15:28:25.646+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | policy-db-migrator | policy-pap | exclude.internal.topics = true policy-clamp-ac-http-ppnt | [2024-01-15T15:28:30.917+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | [2024-01-15T15:28:25.647+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332505646 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | -------------- policy-pap | fetch.max.bytes = 52428800 policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"8ccc2300-50ed-4075-a45b-c429651e9a40","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"d215a98e-7258-4498-8b3e-98d0866bfe7e","timestamp":"2024-01-15T15:28:30.907805883Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"} policy-clamp-runtime-acm | [2024-01-15T15:28:25.647+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Subscribed to topic(s): policy-acruntime-participant policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-pap | 
fetch.max.wait.ms = 500 policy-clamp-ac-http-ppnt | [2024-01-15T15:28:30.954+00:00|INFO|ParticipantMessagePublisher|KAFKA-source-policy-acruntime-participant] Sent Participant Status message to CLAMP - ParticipantStatus(super=ParticipantMessage(messageType=PARTICIPANT_STATUS, messageId=d215a98e-7258-4498-8b3e-98d0866bfe7e, timestamp=2024-01-15T15:28:30.907805883Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c01, automationCompositionId=null, compositionId=null), state=ON_LINE, participantDefinitionUpdates=[], automationCompositionInfoList=[], participantSupportedElementType=[ParticipantSupportedElementType(id=8ccc2300-50ed-4075-a45b-c429651e9a40, typeName=org.onap.policy.clamp.acm.HttpAutomationCompositionElement, typeVersion=1.0.0)]) policy-clamp-runtime-acm | [2024-01-15T15:28:25.647+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d1749759-e0ad-4fd6-9505-37538f59074d, alive=false, publisher=null]]: starting policy-db-migrator | policy-db-migrator | policy-pap | fetch.min.bytes = 1 policy-clamp-ac-http-ppnt | [2024-01-15T15:28:30.963+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | [2024-01-15T15:28:25.662+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | -------------- policy-pap | group.id = aa559ce3-1840-4027-b443-4b66dabb9280 kafka | [2024-01-15 15:27:45,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"d19a6d28-a7d0-4fb8-a2bf-addcffc2e329","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"c5714361-0311-4541-aeaa-881ea2ed50d9","timestamp":"2024-01-15T15:28:30.909550659Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} policy-clamp-runtime-acm | acks = -1 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-db-migrator | -------------- policy-pap | group.instance.id = null kafka | [2024-01-15 15:27:45,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-clamp-ac-http-ppnt | [2024-01-15T15:28:30.964+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS policy-clamp-runtime-acm | auto.include.jmx.reporter = true policy-clamp-ac-sim-ppnt | retry.backoff.ms = 100 policy-db-migrator | policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-01-15 15:27:45,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-clamp-ac-http-ppnt | [2024-01-15T15:28:30.964+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant] policy-clamp-runtime-acm | batch.size = 16384 policy-clamp-ac-sim-ppnt | sasl.client.callback.handler.class = null policy-db-migrator | 
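The [OUT|KAFKA|policy-acruntime-participant] entries show the participant answering the PARTICIPANT_STATUS_REQ by publishing a PARTICIPANT_STATUS JSON payload back on the same topic. A minimal sketch of that publish step (the payload is abbreviated from the log; the String serializers match the ProducerConfig values dumped above):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class StatusPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Abbreviated from the PARTICIPANT_STATUS payload seen in this log.
            String payload = "{\"state\":\"ON_LINE\",\"messageType\":\"PARTICIPANT_STATUS\","
                    + "\"participantId\":\"101c62b3-8918-41b9-a747-d21eb79c6c01\"}";
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-acruntime-participant", payload));
                producer.flush(); // make sure the status leaves before the process exits
            }
        }
    }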
policy-pap | interceptor.classes = []
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"8ccc2300-50ed-4075-a45b-c429651e9a40","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"d215a98e-7258-4498-8b3e-98d0866bfe7e","timestamp":"2024-01-15T15:28:30.907805883Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"}
policy-clamp-runtime-acm | bootstrap.servers = [kafka:9092]
policy-clamp-ac-sim-ppnt | sasl.jaas.config = null
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-pap | internal.leave.group.on.close = true
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition policy-acruntime-participant-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:30.964+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS
policy-clamp-runtime-acm | buffer.memory = 33554432
policy-clamp-ac-sim-ppnt | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | --------------
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:30.988+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-runtime-acm | client.dns.lookup = use_all_dns_ips
policy-clamp-ac-sim-ppnt | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | isolation.level = read_uncommitted
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"7f97fcb8-2a7c-4f99-b027-f3849613ccbf","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"ccd5cbb5-86b9-45b0-aa86-b9901ef300a9","timestamp":"2024-01-15T15:28:30.922233847Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"}
policy-clamp-runtime-acm | client.id = producer-1
policy-clamp-ac-sim-ppnt | sasl.kerberos.service.name = null
policy-db-migrator | --------------
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-runtime-acm | compression.type = none
policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator |
policy-pap | max.partition.fetch.bytes = 1048576
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:30.988+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | connections.max.idle.ms = 540000
policy-clamp-ac-sim-ppnt | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator |
policy-pap | max.poll.interval.ms = 300000
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:31.005+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | delivery.timeout.ms = 120000
policy-clamp-ac-sim-ppnt | sasl.login.callback.handler.class = null
policy-db-migrator | > upgrade 0470-pdp.sql
policy-pap | max.poll.records = 500
policy-clamp-ac-http-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"ca3110f9-47ca-4135-be0d-db3862b51b45","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"aa175f6c-327a-44d3-a931-143c53626cc3","timestamp":"2024-01-15T15:28:30.960765419Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"}
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | enable.idempotence = true
policy-clamp-ac-sim-ppnt | sasl.login.class = null
policy-db-migrator | --------------
policy-pap | metadata.max.age.ms = 300000
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:31.005+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-ac-sim-ppnt | sasl.login.connect.timeout.ms = null
policy-clamp-runtime-acm | interceptor.classes = []
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | metric.reporters = []
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:45.293+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-ac-sim-ppnt | sasl.login.read.timeout.ms = null
policy-clamp-runtime-acm | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | --------------
policy-pap | metrics.num.samples = 2
policy-clamp-ac-http-ppnt | {"messageType":"PARTICIPANT_PRIME","messageId":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","timestamp":"2024-01-15T15:28:45.269562160Z","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d"}
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-ac-sim-ppnt | sasl.login.refresh.buffer.seconds = 300
policy-clamp-runtime-acm | linger.ms = 0
policy-db-migrator |
policy-pap | metrics.recording.level = INFO
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:45.297+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-acruntime-participant]
policy-clamp-ac-sim-ppnt | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | max.block.ms = 60000
policy-db-migrator |
policy-pap | metrics.sample.window.ms = 30000
policy-clamp-ac-http-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"}
policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.factor = 0.8
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | max.in.flight.requests.per.connection = 5
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:45.314+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-ac-sim-ppnt | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | max.request.size = 1048576
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator | --------------
policy-clamp-ac-http-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"}
policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | metadata.max.age.ms = 300000
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
policy-clamp-ac-sim-ppnt | sasl.login.retry.backoff.ms = 100
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | metadata.max.idle.ms = 300000
policy-pap | reconnect.backoff.max.ms = 1000
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:45.314+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK
policy-db-migrator | --------------
policy-clamp-ac-sim-ppnt | sasl.mechanism = GSSAPI
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | metric.reporters = []
policy-pap | reconnect.backoff.ms = 50
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:45.318+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-db-migrator |
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | metrics.num.samples = 2
policy-pap | request.timeout.ms = 30000
policy-clamp-ac-http-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"}
policy-db-migrator |
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.audience = null
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | metrics.recording.level = INFO
policy-pap | retry.backoff.ms = 100
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:45.319+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.expected.issuer = null
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | metrics.sample.window.ms = 30000
policy-pap | sasl.client.callback.handler.class = null
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:45.321+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-db-migrator | --------------
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | partitioner.adaptive.partitioning.enable = true
policy-pap | sasl.jaas.config = null
policy-clamp-ac-http-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"}
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | partitioner.availability.timeout.ms = 0
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:45.321+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | partitioner.class = null
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:45.412+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator |
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | partitioner.ignore.keys = false
policy-pap | sasl.kerberos.service.name = null
policy-clamp-ac-http-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90","state":"ON_LINE"}
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator |
policy-clamp-runtime-acm | receive.buffer.bytes = 32768
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-clamp-ac-http-ppnt | [2024-01-15T15:28:45.412+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | reconnect.backoff.max.ms = 1000
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-clamp-ac-sim-ppnt | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | reconnect.backoff.ms = 50
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-runtime-acm | request.timeout.ms = 30000
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-15 15:27:45,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-clamp-ac-sim-ppnt | security.protocol = PLAINTEXT
policy-db-migrator |
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-clamp-ac-sim-ppnt | security.providers = null
policy-db-migrator |
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-clamp-ac-sim-ppnt | send.buffer.bytes = 131072
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator |
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-clamp-runtime-acm | retries = 2147483647
policy-clamp-runtime-acm | retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-clamp-runtime-acm | sasl.client.callback.handler.class = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-clamp-runtime-acm | sasl.jaas.config = null
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-clamp-runtime-acm | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-clamp-runtime-acm | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | session.timeout.ms = 45000
policy-db-migrator |
policy-clamp-runtime-acm | sasl.kerberos.service.name = null
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
policy-pap | ssl.endpoint.identification.algorithm = https
policy-clamp-ac-sim-ppnt | socket.connection.setup.timeout.ms = 10000
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.engine.factory.class = null
policy-clamp-ac-sim-ppnt | ssl.cipher.suites = null
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-clamp-runtime-acm | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | ssl.key.password = null
policy-clamp-ac-sim-ppnt | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-clamp-runtime-acm | sasl.login.callback.handler.class = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-clamp-ac-sim-ppnt | ssl.endpoint.identification.algorithm = https
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-clamp-runtime-acm | sasl.login.class = null
policy-pap | ssl.keystore.certificate.chain = null
policy-clamp-ac-sim-ppnt | ssl.engine.factory.class = null
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-clamp-runtime-acm | sasl.login.connect.timeout.ms = null
policy-pap | ssl.keystore.key = null
policy-clamp-ac-sim-ppnt | ssl.key.password = null
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-clamp-runtime-acm | sasl.login.read.timeout.ms = null
policy-pap | ssl.keystore.location = null
policy-clamp-ac-sim-ppnt | ssl.keymanager.algorithm = SunX509
kafka | [2024-01-15 15:27:45,557] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions
(state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | sasl.login.refresh.buffer.seconds = 300 policy-pap | ssl.keystore.password = null policy-clamp-ac-sim-ppnt | ssl.keystore.certificate.chain = null kafka | [2024-01-15 15:27:45,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | sasl.login.refresh.min.period.seconds = 60 policy-pap | ssl.keystore.type = JKS policy-clamp-ac-sim-ppnt | ssl.keystore.key = null policy-db-migrator | kafka | [2024-01-15 15:27:45,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-runtime-acm | sasl.login.refresh.window.factor = 0.8 policy-pap | ssl.protocol = TLSv1.3 policy-clamp-ac-sim-ppnt | ssl.keystore.location = null policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql kafka | [2024-01-15 15:27:45,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-runtime-acm | sasl.login.refresh.window.jitter = 0.05 policy-clamp-ac-sim-ppnt | ssl.keystore.password = null policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-runtime-acm | sasl.login.retry.backoff.max.ms = 10000 policy-pap | ssl.provider = null policy-clamp-ac-sim-ppnt | ssl.keystore.type = JKS policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) kafka | [2024-01-15 15:27:45,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-runtime-acm | sasl.login.retry.backoff.ms = 100 policy-pap | ssl.secure.random.implementation = null policy-clamp-ac-sim-ppnt | ssl.protocol = TLSv1.3 policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-runtime-acm | sasl.mechanism = GSSAPI policy-pap | ssl.trustmanager.algorithm = PKIX policy-clamp-ac-sim-ppnt | ssl.provider = null policy-db-migrator | kafka | [2024-01-15 15:27:45,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-runtime-acm | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | ssl.truststore.certificates = null policy-clamp-ac-sim-ppnt | ssl.secure.random.implementation = null policy-db-migrator | kafka | [2024-01-15 15:27:45,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-runtime-acm | sasl.oauthbearer.expected.audience = null policy-pap | ssl.truststore.location = null policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql kafka | [2024-01-15 15:27:45,596] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-runtime-acm | sasl.oauthbearer.expected.issuer = null policy-clamp-ac-sim-ppnt | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-clamp-ac-sim-ppnt | ssl.truststore.certificates = null policy-pap | ssl.truststore.password = null kafka | [2024-01-15 15:27:45,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-clamp-ac-sim-ppnt | ssl.truststore.location = null policy-pap | ssl.truststore.type = JKS kafka | [2024-01-15 15:27:45,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-clamp-ac-sim-ppnt | ssl.truststore.password = null policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-01-15 15:27:45,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | sasl.oauthbearer.jwks.endpoint.url = null policy-clamp-ac-sim-ppnt | ssl.truststore.type = JKS policy-clamp-ac-sim-ppnt | transaction.timeout.ms = 60000 kafka | [2024-01-15 15:27:45,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | sasl.oauthbearer.scope.claim.name = scope policy-clamp-ac-sim-ppnt | transactional.id = null kafka | [2024-01-15 15:27:45,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-clamp-runtime-acm | sasl.oauthbearer.sub.claim.name = sub policy-pap | policy-clamp-ac-sim-ppnt | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-01-15 15:27:45,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-acruntime-participant-0 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | sasl.oauthbearer.token.endpoint.url = null policy-pap | [2024-01-15T15:28:11.565+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-clamp-ac-sim-ppnt | kafka | [2024-01-15 15:27:45,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion 
VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-clamp-runtime-acm | security.protocol = PLAINTEXT policy-pap | [2024-01-15T15:28:11.565+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.531+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. kafka | [2024-01-15 15:27:45,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | security.providers = null policy-pap | [2024-01-15T15:28:11.565+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332491565 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.704+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 kafka | [2024-01-15 15:27:45,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | send.buffer.bytes = 131072 policy-pap | [2024-01-15T15:28:11.565+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Subscribed to topic(s): policy-pdp-pap policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.705+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | [2024-01-15 15:27:45,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | socket.connection.setup.timeout.max.ms = 30000 policy-pap | [2024-01-15T15:28:11.566+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.705+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332464704 kafka | [2024-01-15 15:27:45,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-clamp-runtime-acm | socket.connection.setup.timeout.ms = 10000 policy-pap | [2024-01-15T15:28:11.566+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=9f2f1987-1557-4311-a503-1d74b8bc37d4, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@59db8216 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.705+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=7c97b3bb-d233-4bfc-9f74-229c3af87960, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-01-15 15:27:45,598] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | ssl.cipher.suites = null policy-pap | [2024-01-15T15:28:11.566+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=9f2f1987-1557-4311-a503-1d74b8bc37d4, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.706+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantMessagePublisher$$SpringCGLIB$$0 kafka | [2024-01-15 15:27:45,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-clamp-runtime-acm | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | [2024-01-15T15:28:11.566+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.759+00:00|INFO|ServiceManager|main] service manager starting Listener AcPropertyUpdateListener kafka | [2024-01-15 15:27:45,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | ssl.endpoint.identification.algorithm = https policy-pap | allow.auto.create.topics = true policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantPrimeListener kafka | [2024-01-15 15:27:45,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | ssl.engine.factory.class = null policy-pap | auto.commit.interval.ms = 5000 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionMigrationListener kafka | [2024-01-15 15:27:45,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | ssl.key.password = null policy-pap | auto.include.jmx.reporter = true policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRestartListener kafka | [2024-01-15 15:27:45,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-clamp-runtime-acm | ssl.keymanager.algorithm = SunX509 
policy-pap | auto.offset.reset = latest policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRegisterAckListener kafka | [2024-01-15 15:27:45,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | ssl.keystore.certificate.chain = null policy-pap | bootstrap.servers = [kafka:9092] policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantStatusReqListener kafka | [2024-01-15 15:27:45,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-clamp-runtime-acm | ssl.keystore.key = null policy-pap | check.crcs = true policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionStateChangeListener kafka | [2024-01-15 15:27:45,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | ssl.keystore.location = null policy-pap | client.dns.lookup = use_all_dns_ips policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionDeployListener kafka | [2024-01-15 15:27:45,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | ssl.keystore.password = null policy-pap | client.id = consumer-policy-pap-4 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantDeregisterAckListener kafka | [2024-01-15 15:27:45,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | ssl.keystore.type = JKS policy-pap | client.rack = policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|ServiceManager|main] service manager starting Topic Message Dispatcher kafka | [2024-01-15 15:27:45,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-clamp-runtime-acm | ssl.protocol = TLSv1.3 policy-pap | connections.max.idle.ms = 540000 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=845348fa-d712-41dc-bc31-ba3c79964bd7, consumerInstance=policy-clamp-ac-sim-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, 
allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6f1b8544 kafka | [2024-01-15 15:27:45,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | ssl.provider = null policy-pap | default.api.timeout.ms = 60000 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.809+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=845348fa-d712-41dc-bc31-ba3c79964bd7, consumerInstance=policy-clamp-ac-sim-ppnt, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted kafka | [2024-01-15 15:27:45,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-clamp-runtime-acm | ssl.secure.random.implementation = null policy-pap | enable.auto.commit = true policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.810+00:00|INFO|ServiceManager|main] service manager started kafka | [2024-01-15 15:27:45,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | ssl.trustmanager.algorithm = PKIX policy-pap | exclude.internal.topics = true policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.923+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: kafka | [2024-01-15 15:27:45,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | ssl.truststore.certificates = null policy-pap | fetch.max.bytes = 52428800 policy-clamp-ac-sim-ppnt | [] kafka | [2024-01-15 15:27:45,602] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | ssl.truststore.location = null policy-pap | fetch.max.wait.ms = 500 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:44.934+00:00|INFO|network|main] [OUT|KAFKA|policy-acruntime-participant] kafka | [2024-01-15 15:27:45,602] TRACE [Controller 
id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-clamp-runtime-acm | ssl.truststore.password = null policy-pap | fetch.min.bytes = 1 kafka | [2024-01-15 15:27:45,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-ac-sim-ppnt | {"participantSupportedElementType":[{"id":"d19a6d28-a7d0-4fb8-a2bf-addcffc2e329","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"a9a98982-03d2-40b6-95f7-d1c6304988cd","timestamp":"2024-01-15T15:27:44.811251262Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"} policy-db-migrator | -------------- policy-clamp-runtime-acm | ssl.truststore.type = JKS policy-pap | group.id = policy-pap kafka | [2024-01-15 15:27:45,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:45.759+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 2 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) policy-clamp-runtime-acm | transaction.timeout.ms = 60000 policy-pap | group.instance.id = null kafka | [2024-01-15 15:27:45,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:45.761+00:00|INFO|Metadata|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Cluster ID: f-_bmQWWQMKgLbbohjyq1w policy-db-migrator | -------------- policy-clamp-runtime-acm | transactional.id = null policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-01-15 15:27:45,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:45.777+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} policy-db-migrator | policy-clamp-runtime-acm | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | interceptor.classes = [] kafka | [2024-01-15 15:27:45,605] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) policy-clamp-ac-sim-ppnt | 
[2024-01-15T15:27:45.778+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: f-_bmQWWQMKgLbbohjyq1w policy-db-migrator | policy-clamp-runtime-acm | policy-pap | internal.leave.group.on.close = true policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:45.842+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 4 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-15 15:27:45,610] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-clamp-runtime-acm | [2024-01-15T15:28:25.673+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:45.886+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 4 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-15 15:27:45,610] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | [2024-01-15T15:28:25.691+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-pap | isolation.level = read_uncommitted policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:45.946+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 6 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-15 15:27:45,610] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) policy-clamp-runtime-acm | [2024-01-15T15:28:25.691+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.002+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 5 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,610] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | [2024-01-15T15:28:25.691+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332505691 policy-pap | max.partition.fetch.bytes = 1048576 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.079+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 kafka | [2024-01-15 15:27:45,610] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | [2024-01-15T15:28:25.691+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d1749759-e0ad-4fd6-9505-37538f59074d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | max.poll.interval.ms = 300000 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.090+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 8 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,611] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | [2024-01-15T15:28:25.691+00:00|INFO|ServiceManager|main] service manager starting Publisher AutomationCompositionStateChangePublisher$$SpringCGLIB$$0 policy-pap | max.poll.records = 500 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.129+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 6 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,611] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-clamp-runtime-acm | [2024-01-15T15:28:25.694+00:00|INFO|ServiceManager|main] service manager starting Publisher AutomationCompositionDeployPublisher$$SpringCGLIB$$0 policy-pap | metadata.max.age.ms = 300000 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.205+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 10 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,611] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | [2024-01-15T15:28:25.694+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantPrimePublisher$$SpringCGLIB$$0 policy-pap | metric.reporters = [] policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.237+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 7 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-15 15:27:45,611] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, 
concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-clamp-runtime-acm | [2024-01-15T15:28:25.695+00:00|INFO|ServiceManager|main] service manager starting Publisher AutomationCompositionMigrationPublisher$$SpringCGLIB$$0 policy-pap | metrics.num.samples = 2 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.314+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 12 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | [2024-01-15T15:28:25.695+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantDeregisterAckPublisher$$SpringCGLIB$$0 policy-pap | metrics.recording.level = INFO policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.343+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 8 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | [2024-01-15T15:28:25.695+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantRegisterAckPublisher$$SpringCGLIB$$0 policy-pap | metrics.sample.window.ms = 30000 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.425+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 14 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | [2024-01-15T15:28:25.695+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantStatusReqPublisher$$SpringCGLIB$$0 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.445+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 9 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-15 15:27:45,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-clamp-runtime-acm | [2024-01-15T15:28:25.695+00:00|INFO|ServiceManager|main] service manager starting Publisher AcElementPropertiesPublisher$$SpringCGLIB$$0 policy-pap | receive.buffer.bytes = 65536 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.537+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 16 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,939] INFO [Controller id=1 epoch=1] Changed partition policy-acruntime-participant-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | [2024-01-15T15:28:25.695+00:00|INFO|ServiceManager|main] service manager starting Publisher ParticipantRestartPublisher$$SpringCGLIB$$0 policy-pap | reconnect.backoff.max.ms = 1000 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.552+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 10 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) policy-clamp-runtime-acm | [2024-01-15T15:28:25.695+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantRegisterListener policy-pap | reconnect.backoff.ms = 50 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.644+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 18 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | [2024-01-15T15:28:25.696+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantStatusListener policy-pap | request.timeout.ms = 30000 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.664+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 11 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | 
[2024-01-15 15:27:45,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | [2024-01-15T15:28:25.696+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionStateChangeAckListener policy-pap | retry.backoff.ms = 100 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.752+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 20 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-clamp-runtime-acm | [2024-01-15T15:28:25.696+00:00|INFO|ServiceManager|main] service manager starting Listener AutomationCompositionUpdateAckListener policy-pap | sasl.client.callback.handler.class = null policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.770+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 12 : {policy-acruntime-participant=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-15 15:27:45,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-clamp-runtime-acm | [2024-01-15T15:28:25.696+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantDeregisterListener policy-pap | sasl.jaas.config = null policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.861+00:00|WARN|NetworkClient|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Error while fetching metadata with correlation id 22 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | [2024-01-15T15:28:25.696+00:00|INFO|ServiceManager|main] service manager starting Listener ParticipantPrimeAckListener policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.880+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 13 : {policy-acruntime-participant=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:45,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) policy-clamp-runtime-acm | [2024-01-15T15:28:25.696+00:00|INFO|ServiceManager|main] service manager starting Topic Message Dispatcher policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.975+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) kafka | [2024-01-15 15:27:45,942] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-clamp-runtime-acm | [2024-01-15T15:28:25.697+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=53971418-7c64-4b4a-8b2e-3deb55882781, consumerInstance=policy-clamp-runtime-acm, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3bcc8f13 policy-pap | sasl.kerberos.service.name = null policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:46.982+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] (Re-)joining group kafka | [2024-01-15 15:27:45,942] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-clamp-runtime-acm | [2024-01-15T15:28:25.697+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=53971418-7c64-4b4a-8b2e-3deb55882781, consumerInstance=policy-clamp-runtime-acm, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-acruntime-participant,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-acruntime-participant, effectiveTopic=policy-acruntime-participant, #recentEvents=0, 
locked=false, #topicListeners=1]]]]: register: start not attempted
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:47.029+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Request joining group due to: need to re-join with the given member-id: consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2-3b6606cc-038f-415a-9dc8-f836b93e7c8f
kafka | [2024-01-15 15:27:45,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-clamp-runtime-acm | [2024-01-15T15:28:25.697+00:00|INFO|ServiceManager|main] service manager started
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:47.030+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-01-15 15:27:45,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-pap | sasl.login.callback.handler.class = null
policy-clamp-runtime-acm | [2024-01-15T15:28:25.698+00:00|INFO|Application|main] Started Application in 10.864 seconds (process running for 11.762)
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:47.030+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] (Re-)joining group
kafka | [2024-01-15 15:27:45,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.class = null
policy-clamp-runtime-acm | [2024-01-15T15:28:26.173+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: f-_bmQWWQMKgLbbohjyq1w
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:47.093+00:00|INFO|ParticipantMessagePublisher|main] Sent Participant Register message to CLAMP - ParticipantRegister(super=ParticipantMessage(messageType=PARTICIPANT_REGISTER, messageId=a9a98982-03d2-40b6-95f7-d1c6304988cd, timestamp=2024-01-15T15:27:44.811251262Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c90, automationCompositionId=null, compositionId=null), participantSupportedElementType=[ParticipantSupportedElementType(id=d19a6d28-a7d0-4fb8-a2bf-addcffc2e329, typeName=org.onap.policy.clamp.acm.SimAutomationCompositionElement, typeVersion=1.0.0)])
kafka | [2024-01-15 15:27:45,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | sasl.login.connect.timeout.ms = null
policy-clamp-runtime-acm | [2024-01-15T15:28:26.174+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 7 with epoch 0
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:47.097+00:00|INFO|Application|main] Started Application in 18.332 seconds (process running for 20.111)
kafka | [2024-01-15 15:27:45,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.read.timeout.ms = null
policy-clamp-runtime-acm | [2024-01-15T15:28:26.175+00:00|INFO|Metadata|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Cluster ID: f-_bmQWWQMKgLbbohjyq1w
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:50.082+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Successfully joined group with generation Generation{generationId=1, memberId='consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2-3b6606cc-038f-415a-9dc8-f836b93e7c8f', protocol='range'}
policy-db-migrator |
kafka | [2024-01-15 15:27:45,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-clamp-runtime-acm | [2024-01-15T15:28:26.176+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:50.092+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Finished assignment for group at generation 1: {consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2-3b6606cc-038f-415a-9dc8-f836b93e7c8f=Assignment(partitions=[policy-acruntime-participant-0])}
policy-db-migrator |
kafka | [2024-01-15 15:27:45,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-clamp-runtime-acm | [2024-01-15T15:28:26.184+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] (Re-)joining group
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:50.133+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Successfully synced group in generation Generation{generationId=1, memberId='consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2-3b6606cc-038f-415a-9dc8-f836b93e7c8f', protocol='range'}
kafka | [2024-01-15 15:27:45,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-clamp-runtime-acm | [2024-01-15T15:28:26.197+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Request joining group due to: need to re-join with the given member-id: consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2-9e834808-7054-48b9-9bb6-b00f229f65f7
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:50.134+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Notifying assignor about the new Assignment(partitions=[policy-acruntime-participant-0])
kafka | [2024-01-15 15:27:45,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-clamp-runtime-acm | [2024-01-15T15:28:26.197+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:50.137+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Adding newly assigned partitions: policy-acruntime-participant-0
kafka | [2024-01-15 15:27:45,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-clamp-runtime-acm | [2024-01-15T15:28:26.197+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] (Re-)joining group
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:50.157+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Found no committed offset for partition policy-acruntime-participant-0
kafka | [2024-01-15 15:27:45,959] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.ms = 100
policy-clamp-runtime-acm | [2024-01-15T15:28:29.203+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Successfully joined group with generation Generation{generationId=1, memberId='consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2-9e834808-7054-48b9-9bb6-b00f229f65f7', protocol='range'}
policy-clamp-ac-sim-ppnt | [2024-01-15T15:27:50.176+00:00|INFO|SubscriptionState|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2, groupId=845348fa-d712-41dc-bc31-ba3c79964bd7] Resetting offset for partition policy-acruntime-participant-0 to position FetchPosition{offset=3, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
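The participant join sequence above (MemberIdRequiredException on the first JoinGroup, re-join with the assigned member id, sync, partition assignment, then an offset reset because no committed offset exists) is Kafka's standard consumer-group handshake. A minimal sketch of a consumer that would produce exactly this sequence against the broker, topic, and group id named in the log; the class name and configuration are illustrative assumptions, not the participant's actual wiring:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ParticipantTopicListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and group id as they appear in the log above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "845348fa-d712-41dc-bc31-ba3c79964bd7");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // With no committed offset for the group, this setting triggers the
        // "Found no committed offset ... Resetting offset" lines seen above.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() starts the join dance: the first JoinGroup request is
            // rejected with MemberIdRequiredException, the client re-joins with the
            // member id the coordinator assigned, and the group is then synced.
            consumer.subscribe(List.of("policy-acruntime-participant"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("[IN|KAFKA|%s] %s%n", record.topic(), record.value());
                }
            }
        }
    }
}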
kafka | [2024-01-15 15:27:45,959] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | sasl.mechanism = GSSAPI
policy-clamp-runtime-acm | [2024-01-15T15:28:29.213+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Finished assignment for group at generation 1: {consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2-9e834808-7054-48b9-9bb6-b00f229f65f7=Assignment(partitions=[policy-acruntime-participant-0])}
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:06.541+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-clamp-runtime-acm | [2024-01-15T15:28:29.231+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Successfully synced group in generation Generation{generationId=1, memberId='consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2-9e834808-7054-48b9-9bb6-b00f229f65f7', protocol='range'}
policy-clamp-ac-sim-ppnt | {"participantSupportedElementType":[{"id":"ca3110f9-47ca-4135-be0d-db3862b51b45","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_REGISTER","messageId":"c20e92a6-a0cb-4d1c-b7de-8adb09ada244","timestamp":"2024-01-15T15:28:05.836283997Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"}
kafka | [2024-01-15 15:27:45,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-pap | sasl.oauthbearer.expected.audience = null
policy-clamp-runtime-acm | [2024-01-15T15:28:29.232+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Notifying assignor about the new Assignment(partitions=[policy-acruntime-participant-0])
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:06.544+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_REGISTER
kafka | [2024-01-15 15:27:45,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-clamp-runtime-acm | [2024-01-15T15:28:29.236+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Adding newly assigned partitions: policy-acruntime-participant-0
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:30.905+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-clamp-runtime-acm | [2024-01-15T15:28:29.243+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Found no committed offset for partition policy-acruntime-participant-0
policy-clamp-ac-sim-ppnt | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"552fa693-0f35-4f3d-bcba-48cfac49cb30","timestamp":"2024-01-15T15:28:30.853208592Z"}
kafka | [2024-01-15 15:27:45,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-clamp-runtime-acm | [2024-01-15T15:28:29.253+00:00|INFO|SubscriptionState|KAFKA-source-policy-acruntime-participant] [Consumer clientId=consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2, groupId=53971418-7c64-4b4a-8b2e-3deb55882781] Resetting offset for partition policy-acruntime-participant-0 to position FetchPosition{offset=4, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
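The MessageTypeDispatcher lines show that every component reads the shared policy-acruntime-participant topic and drops message types addressed to other roles, hence "discarding event of type PARTICIPANT_REGISTER" on the participant side. A minimal sketch of that filtering step, assuming Jackson for JSON parsing; the class name, method, and subscription set are illustrative stand-ins, not the actual ONAP dispatcher source:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Set;

// Illustrative stand-in for the dispatcher behaviour seen in the log: a component
// only handles the message types it subscribes to and logs-and-drops the rest.
public class MessageTypeFilter {
    private static final ObjectMapper MAPPER = new ObjectMapper();
    // Hypothetical subscription set for a participant; the real set is wired per component.
    private final Set<String> handledTypes = Set.of("PARTICIPANT_STATUS_REQ", "PARTICIPANT_PRIME");

    public void onMessage(String json) throws Exception {
        JsonNode event = MAPPER.readTree(json);
        String type = event.path("messageType").asText();
        if (!handledTypes.contains(type)) {
            System.out.println("discarding event of type " + type);
            return;
        }
        // Hand off to the real handler here, e.g. build a PARTICIPANT_STATUS reply.
        System.out.println("handling " + type + " messageId=" + event.path("messageId").asText());
    }
}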
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:30.919+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [OUT|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-clamp-runtime-acm | [2024-01-15T15:28:29.852+00:00|INFO|[/onap/policy/clamp/acm]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-clamp-ac-sim-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"d19a6d28-a7d0-4fb8-a2bf-addcffc2e329","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"c5714361-0311-4541-aeaa-881ea2ed50d9","timestamp":"2024-01-15T15:28:30.909550659Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"}
kafka | [2024-01-15 15:27:45,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-clamp-runtime-acm | [2024-01-15T15:28:29.852+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:30.954+00:00|INFO|ParticipantMessagePublisher|KAFKA-source-policy-acruntime-participant] Sent Participant Status message to CLAMP - ParticipantStatus(super=ParticipantMessage(messageType=PARTICIPANT_STATUS, messageId=c5714361-0311-4541-aeaa-881ea2ed50d9, timestamp=2024-01-15T15:28:30.909550659Z, participantId=101c62b3-8918-41b9-a747-d21eb79c6c90, automationCompositionId=null, compositionId=null), state=ON_LINE, participantDefinitionUpdates=[], automationCompositionInfoList=[], participantSupportedElementType=[ParticipantSupportedElementType(id=d19a6d28-a7d0-4fb8-a2bf-addcffc2e329, typeName=org.onap.policy.clamp.acm.SimAutomationCompositionElement, typeVersion=1.0.0)])
kafka | [2024-01-15 15:27:45,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-clamp-runtime-acm | [2024-01-15T15:28:29.855+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:30.962+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
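The [OUT|KAFKA|policy-acruntime-participant] lines mark the participant publishing its PARTICIPANT_STATUS reply back onto the same topic. A minimal sketch of such a publish using the plain kafka-clients producer; the class name is hypothetical and the payload is abbreviated from the status JSON in the log, not the participant's full message body:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class StatusPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // Payload shape copied from the PARTICIPANT_STATUS lines above, shortened here.
        String status = "{\"state\":\"ON_LINE\",\"messageType\":\"PARTICIPANT_STATUS\"}";
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("policy-acruntime-participant", status));
            producer.flush();
        }
    }
}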
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-clamp-runtime-acm | [2024-01-15T15:28:30.856+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-6] ***** OrderedServiceImpl implementers:
policy-clamp-ac-sim-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"d19a6d28-a7d0-4fb8-a2bf-addcffc2e329","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"c5714361-0311-4541-aeaa-881ea2ed50d9","timestamp":"2024-01-15T15:28:30.909550659Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"}
kafka | [2024-01-15 15:27:45,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-clamp-runtime-acm | []
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:30.963+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS
kafka | [2024-01-15 15:27:45,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | security.protocol = PLAINTEXT
policy-clamp-runtime-acm | [2024-01-15T15:28:30.857+00:00|INFO|network|http-nio-6969-exec-6] [OUT|KAFKA|policy-acruntime-participant]
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:30.963+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | security.providers = null
policy-clamp-runtime-acm | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"552fa693-0f35-4f3d-bcba-48cfac49cb30","timestamp":"2024-01-15T15:28:30.853208592Z"}
policy-clamp-ac-sim-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"8ccc2300-50ed-4075-a45b-c429651e9a40","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"d215a98e-7258-4498-8b3e-98d0866bfe7e","timestamp":"2024-01-15T15:28:30.907805883Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"}
kafka | [2024-01-15 15:27:45,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | send.buffer.bytes = 131072
policy-clamp-runtime-acm | [2024-01-15T15:28:30.967+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:30.963+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS
kafka | [2024-01-15 15:27:45,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-pap | session.timeout.ms = 45000
policy-clamp-runtime-acm | {"messageType":"PARTICIPANT_STATUS_REQ","messageId":"552fa693-0f35-4f3d-bcba-48cfac49cb30","timestamp":"2024-01-15T15:28:30.853208592Z"}
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:30.992+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-clamp-runtime-acm | [2024-01-15T15:28:30.968+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS_REQ
policy-clamp-ac-sim-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"7f97fcb8-2a7c-4f99-b027-f3849613ccbf","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"ccd5cbb5-86b9-45b0-aa86-b9901ef300a9","timestamp":"2024-01-15T15:28:30.922233847Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"}
kafka | [2024-01-15 15:27:45,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-clamp-runtime-acm | [2024-01-15T15:28:30.989+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:30.992+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS
policy-db-migrator | --------------
policy-pap | ssl.cipher.suites = null
policy-clamp-runtime-acm | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"d19a6d28-a7d0-4fb8-a2bf-addcffc2e329","typeName":"org.onap.policy.clamp.acm.SimAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"c5714361-0311-4541-aeaa-881ea2ed50d9","timestamp":"2024-01-15T15:28:30.909550659Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90"}
kafka | [2024-01-15 15:27:45,964] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:31.007+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-db-migrator |
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-clamp-runtime-acm | [2024-01-15T15:28:31.130+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-ac-sim-ppnt | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"ca3110f9-47ca-4135-be0d-db3862b51b45","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"aa175f6c-327a-44d3-a931-143c53626cc3","timestamp":"2024-01-15T15:28:30.960765419Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"}
kafka | [2024-01-15 15:27:45,964] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-clamp-runtime-acm | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"8ccc2300-50ed-4075-a45b-c429651e9a40","typeName":"org.onap.policy.clamp.acm.HttpAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"d215a98e-7258-4498-8b3e-98d0866bfe7e","timestamp":"2024-01-15T15:28:30.907805883Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01"}
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:31.007+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_STATUS
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-01-15 15:27:45,964] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-clamp-runtime-acm | [2024-01-15T15:28:31.167+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:45.291+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-pap | ssl.engine.factory.class = null
kafka | [2024-01-15 15:27:45,964] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-clamp-ac-sim-ppnt | {"messageType":"PARTICIPANT_PRIME","messageId":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","timestamp":"2024-01-15T15:28:45.269562160Z","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d"}
policy-clamp-runtime-acm | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"7f97fcb8-2a7c-4f99-b027-f3849613ccbf","typeName":"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"ccd5cbb5-86b9-45b0-aa86-b9901ef300a9","timestamp":"2024-01-15T15:28:30.922233847Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02"}
kafka | [2024-01-15 15:27:45,964] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
policy-pap | ssl.key.password = null
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:45.306+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-runtime-acm | [2024-01-15T15:28:31.232+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,964] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.keymanager.algorithm = SunX509
policy-clamp-ac-sim-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"}
policy-clamp-runtime-acm | {"state":"ON_LINE","participantDefinitionUpdates":[],"automationCompositionInfoList":[],"participantSupportedElementType":[{"id":"ca3110f9-47ca-4135-be0d-db3862b51b45","typeName":"org.onap.policy.clamp.acm.PolicyAutomationCompositionElement","typeVersion":"1.0.0"}],"messageType":"PARTICIPANT_STATUS","messageId":"aa175f6c-327a-44d3-a931-143c53626cc3","timestamp":"2024-01-15T15:28:30.960765419Z","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03"}
kafka | [2024-01-15 15:27:45,964] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | ssl.keystore.certificate.chain = null
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:45.307+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK
policy-clamp-runtime-acm | [2024-01-15T15:28:45.270+00:00|INFO|network|pool-3-thread-1] [OUT|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,965] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | ssl.keystore.key = null
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:45.317+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-runtime-acm | {"messageType":"PARTICIPANT_PRIME","messageId":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","timestamp":"2024-01-15T15:28:45.269562160Z","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d"}
kafka | [2024-01-15 15:27:45,970] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-pap | ssl.keystore.location = null
policy-clamp-ac-sim-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"}
policy-clamp-runtime-acm | [2024-01-15T15:28:45.293+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,971] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.keystore.password = null
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:45.317+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK
policy-clamp-runtime-acm | {"messageType":"PARTICIPANT_PRIME","messageId":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","timestamp":"2024-01-15T15:28:45.269562160Z","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d"}
kafka | [2024-01-15 15:27:45,971] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
policy-pap | ssl.keystore.type = JKS
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:45.321+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-runtime-acm | [2024-01-15T15:28:45.294+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME
kafka | [2024-01-15 15:27:45,971] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.protocol = TLSv1.3
policy-clamp-ac-sim-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"}
policy-clamp-runtime-acm | [2024-01-15T15:28:45.321+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,971] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
policy-db-migrator |
policy-pap | ssl.provider = null
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:45.322+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK
policy-clamp-runtime-acm | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c01","state":"ON_LINE"}
kafka | [2024-01-15 15:27:45,971] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
policy-db-migrator |
policy-pap | ssl.secure.random.implementation = null
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:45.400+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-acruntime-participant]
policy-clamp-runtime-acm | [2024-01-15T15:28:45.398+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,971] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-clamp-ac-sim-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90","state":"ON_LINE"}
policy-clamp-runtime-acm | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c02","state":"ON_LINE"}
kafka | [2024-01-15 15:27:45,971] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.truststore.certificates = null
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:45.408+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
policy-clamp-runtime-acm | [2024-01-15T15:28:45.417+00:00|ERROR|SupervisionHandler|KAFKA-source-policy-acruntime-participant] AC Definition 46f66bfb-4746-4a5e-be64-7f61ac30302d already primed/deprimed with participant 101c62b3-8918-41b9-a747-d21eb79c6c02
kafka | [2024-01-15 15:27:45,971] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | ssl.truststore.location = null
policy-clamp-ac-sim-ppnt | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90","state":"ON_LINE"}
policy-clamp-runtime-acm | [2024-01-15T15:28:45.417+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,971] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.truststore.password = null
policy-clamp-ac-sim-ppnt | [2024-01-15T15:28:45.408+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-acruntime-participant] discarding event of type PARTICIPANT_PRIME_ACK
policy-clamp-runtime-acm | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c03","state":"ON_LINE"}
kafka | [2024-01-15 15:27:45,972] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
policy-db-migrator |
policy-pap | ssl.truststore.type = JKS
policy-clamp-runtime-acm | [2024-01-15T15:28:45.428+00:00|ERROR|SupervisionHandler|KAFKA-source-policy-acruntime-participant] AC Definition 46f66bfb-4746-4a5e-be64-7f61ac30302d already primed/deprimed with participant 101c62b3-8918-41b9-a747-d21eb79c6c03
kafka | [2024-01-15 15:27:45,972] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
policy-db-migrator |
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-clamp-runtime-acm | [2024-01-15T15:28:45.432+00:00|INFO|network|KAFKA-source-policy-acruntime-participant] [IN|KAFKA|policy-acruntime-participant]
kafka | [2024-01-15 15:27:45,972] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-pap |
policy-clamp-runtime-acm | {"compositionState":"COMMISSIONED","responseTo":"0a1cfa63-e1a8-484a-96db-5b50eb1b7aa9","result":true,"stateChangeResult":"NO_ERROR","message":"Deprimed","messageType":"PARTICIPANT_PRIME_ACK","compositionId":"46f66bfb-4746-4a5e-be64-7f61ac30302d","participantId":"101c62b3-8918-41b9-a747-d21eb79c6c90","state":"ON_LINE"}
kafka | [2024-01-15 15:27:45,972] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-15T15:28:11.571+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-clamp-runtime-acm | [2024-01-15T15:28:45.444+00:00|ERROR|SupervisionHandler|KAFKA-source-policy-acruntime-participant] AC Definition 46f66bfb-4746-4a5e-be64-7f61ac30302d already primed/deprimed with participant 101c62b3-8918-41b9-a747-d21eb79c6c90
kafka | [2024-01-15 15:27:45,972] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | [2024-01-15T15:28:11.571+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-clamp-runtime-acm | [2024-01-15T15:28:45.615+00:00|INFO|SupervisionAspect|scheduling-1] Add scheduled scanning
kafka | [2024-01-15 15:27:45,972] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-15T15:28:11.571+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332491571
kafka | [2024-01-15 15:27:45,972] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-15T15:28:11.571+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-01-15 15:27:45,972] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-acruntime-participant', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-acruntime-participant-0 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-15T15:28:11.571+00:00|INFO|ServiceManager|main] Policy PAP starting topics
kafka | [2024-01-15 15:27:45,973] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-pap | [2024-01-15T15:28:11.571+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=9f2f1987-1557-4311-a503-1d74b8bc37d4, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-01-15 15:27:45,973] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-15T15:28:11.572+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=aa559ce3-1840-4027-b443-4b66dabb9280, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-01-15 15:27:45,973] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
policy-pap | [2024-01-15T15:28:11.572+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e7eeb9bc-4558-494d-86bc-0768acb6be1f, alive=false, publisher=null]]: starting
kafka | [2024-01-15 15:27:45,973] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-15T15:28:11.592+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-01-15 15:27:45,973] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
policy-db-migrator |
policy-pap | acks = -1
kafka | [2024-01-15 15:27:45,973] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
policy-db-migrator |
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-01-15 15:27:45,973] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-pap | batch.size = 16384
kafka | [2024-01-15 15:27:45,973] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
policy-db-migrator | --------------
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-01-15 15:27:45,973] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
policy-pap | buffer.memory = 33554432
kafka | [2024-01-15 15:27:45,974] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator | --------------
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-01-15 15:27:45,974] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
policy-db-migrator |
policy-pap | client.id = producer-1
kafka | [2024-01-15 15:27:45,974] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
policy-db-migrator |
policy-pap | compression.type = none
kafka | [2024-01-15 15:27:45,974] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-01-15 15:27:45,974] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
policy-db-migrator | --------------
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-01-15 15:27:45,974] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | enable.idempotence = true
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:45,974] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
policy-pap | interceptor.classes = []
policy-db-migrator |
kafka | [2024-01-15 15:27:45,974] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator |
kafka | [2024-01-15 15:27:45,975] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
policy-pap | linger.ms = 0
policy-db-migrator | > upgrade 0770-toscarequirement.sql
kafka | [2024-01-15 15:27:45,975] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
policy-pap | max.block.ms = 60000
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:45,975] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-01-15 15:27:45,975] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
policy-pap | max.in.flight.requests.per.connection = 5
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
policy-pap | max.request.size = 1048576
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:45,975] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator |
kafka | [2024-01-15 15:27:45,975] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
policy-pap | metadata.max.idle.ms = 300000
policy-db-migrator |
kafka | [2024-01-15 15:27:45,975] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
policy-pap | metric.reporters = []
policy-db-migrator | > upgrade 0780-toscarequirements.sql
kafka | [2024-01-15 15:27:45,975] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
policy-pap | metrics.num.samples = 2
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:45,975] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
policy-pap | metrics.recording.level = INFO
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
kafka | [2024-01-15 15:27:45,976] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:45,976] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
policy-pap | partitioner.adaptive.partitioning.enable = true
policy-db-migrator |
kafka | [2024-01-15 15:27:45,976] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to
broker 1 for partition __consumer_offsets-27 (state.change.logger) policy-pap | partitioner.availability.timeout.ms = 0 policy-db-migrator | kafka | [2024-01-15 15:27:45,976] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) policy-pap | partitioner.class = null policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql kafka | [2024-01-15 15:27:45,976] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) policy-pap | partitioner.ignore.keys = false policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,976] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) policy-pap | receive.buffer.bytes = 32768 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-01-15 15:27:45,976] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,977] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | kafka | [2024-01-15 15:27:45,983] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) policy-pap | request.timeout.ms = 30000 policy-db-migrator | kafka | [2024-01-15 15:27:45,986] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) policy-pap | retries = 2147483647 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,987] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='policy-acruntime-participant', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.jaas.config = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) kafka | [2024-01-15 15:27:45,987] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,987] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | kafka | [2024-01-15 15:27:45,987] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | kafka | [2024-01-15 15:27:45,987] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql kafka | [2024-01-15 15:27:45,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 
from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.login.class = null kafka | [2024-01-15 15:27:45,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-01-15 15:27:45,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-01-15 15:27:45,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-01-15 15:27:45,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-01-15 15:27:45,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-15 15:27:45,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-01-15 15:27:45,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-01-15 15:27:45,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-01-15 15:27:45,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.mechanism = GSSAPI kafka | [2024-01-15 15:27:45,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-01-15 15:27:45,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-01-15 15:27:45,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-01-15 15:27:45,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-01-15 15:27:45,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | kafka | [2024-01-15 15:27:45,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-01-15 15:27:45,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql kafka | [2024-01-15 15:27:45,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) kafka | [2024-01-15 15:27:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | kafka | [2024-01-15 15:27:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | security.providers = null policy-db-migrator | kafka | [2024-01-15 15:27:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql kafka | [2024-01-15 15:27:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) kafka | [2024-01-15 15:27:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.cipher.suites = null policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | kafka | [2024-01-15 15:27:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | kafka | [2024-01-15 15:27:45,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.engine.factory.class = null policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql kafka | [2024-01-15 15:27:45,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.key.password = null policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) kafka | [2024-01-15 15:27:45,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- kafka | [2024-01-15 15:27:45,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.key = null policy-db-migrator | kafka | [2024-01-15 15:27:45,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.location = null policy-db-migrator | kafka | [2024-01-15 15:27:45,991] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.password = null policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-pap | ssl.keystore.type = JKS kafka | [2024-01-15 15:27:45,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-01-15 15:27:45,992] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) policy-pap | ssl.provider = null kafka | [2024-01-15 15:27:45,992] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.secure.random.implementation = null kafka | [2024-01-15 15:27:45,992] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-01-15 15:27:45,992] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | ssl.truststore.certificates = null kafka | [2024-01-15 15:27:45,992] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | ssl.truststore.location = null kafka | [2024-01-15 15:27:45,992] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.truststore.password = null kafka | [2024-01-15 15:27:45,993] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-pap | ssl.truststore.type = JKS kafka | [2024-01-15 15:27:45,993] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | transaction.timeout.ms = 60000 kafka | [2024-01-15 15:27:45,993] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | transactional.id = null kafka | [2024-01-15 15:27:46,001] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-pap | kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:11.613+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-pap | [2024-01-15T15:28:11.631+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:11.631+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | [2024-01-15T15:28:11.631+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332491631 kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | [2024-01-15T15:28:11.632+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e7eeb9bc-4558-494d-86bc-0768acb6be1f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-pap | [2024-01-15T15:28:11.632+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c0d91e8d-9f78-4650-83a5-95661336a408, alive=false, publisher=null]]: starting kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:11.633+00:00|INFO|ProducerConfig|main] ProducerConfig values: kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) policy-pap | acks = -1 kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | auto.include.jmx.reporter = true kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | batch.size = 16384 kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-pap | buffer.memory = 33554432 kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-acruntime-participant-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-pap | client.id = producer-2 kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | compression.type = none kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | connections.max.idle.ms = 540000 kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | delivery.timeout.ms = 120000 kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-pap | enable.idempotence = true kafka | [2024-01-15 15:27:46,002] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | interceptor.classes = [] kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | linger.ms = 0 kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) policy-pap | max.block.ms = 60000 policy-db-migrator | policy-pap | max.in.flight.requests.per.connection = 5 kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | max.request.size = 1048576 kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-pap | metadata.max.age.ms = 300000 kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | metadata.max.idle.ms = 300000 kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metric.reporters = [] policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metrics.num.samples = 2 policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metrics.recording.level = INFO policy-db-migrator | kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partitioner.adaptive.partitioning.enable = true policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partitioner.availability.timeout.ms = 0 policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partitioner.class = null policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) kafka | [2024-01-15 15:27:46,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partitioner.ignore.keys = false policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) policy-pap | receive.buffer.bytes = 32768 policy-db-migrator | kafka | [2024-01-15 15:27:46,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | kafka | [2024-01-15 15:27:46,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-pap | 
reconnect.backoff.ms = 50 kafka | [2024-01-15 15:27:46,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | request.timeout.ms = 30000 kafka | [2024-01-15 15:27:46,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | retries = 2147483647 kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | retry.backoff.ms = 100 kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | sasl.client.callback.handler.class = null kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.jaas.config = null policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.kerberos.service.name = null kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.callback.handler.class = null 
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-pap | sasl.login.class = null kafka | [2024-01-15 15:27:46,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-01-15 15:27:46,010] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-01-15 15:27:46,016] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.mechanism = GSSAPI kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-db-migrator | kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-16 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) policy-db-migrator | -------------- policy-pap | security.protocol = PLAINTEXT kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) policy-db-migrator | policy-pap | security.providers = null kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | policy-pap | send.buffer.bytes = 131072 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.cipher.suites = null kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) policy-db-migrator | policy-pap | ssl.engine.factory.class = null kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-acruntime-participant-0 (state.change.logger) policy-db-migrator | > upgrade 
1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-pap | ssl.key.password = null kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-01-15 15:27:46,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keystore.key = null kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | policy-pap | ssl.keystore.location = null kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | policy-pap | ssl.keystore.password = null kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-pap | ssl.keystore.type = JKS kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.provider = null kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.secure.random.implementation = null kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) policy-db-migrator | policy-pap | ssl.trustmanager.algorithm = PKIX kafka | 
[2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) policy-db-migrator | policy-pap | ssl.truststore.certificates = null kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-pap | ssl.truststore.location = null kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.truststore.password = null kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.truststore.type = JKS kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | -------------- policy-pap | transaction.timeout.ms = 60000 kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) policy-db-migrator | policy-pap | transactional.id = null kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-pap | kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:11.633+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
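The "Instantiated an idempotent producer" entry above, together with the controller's "Acquired new producerId block ... firstProducerId=0, size=1000" just below, shows both halves of Kafka's idempotence handshake: the client requests a producer id and the broker hands one out of a pre-allocated block, which is why producer-1 and producer-2 later log "ProducerId set to 4/5 with epoch 0". A minimal sketch of a client configured the same way (this is not the PAP source; the topic name and payload are copied from the log only for illustration):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class IdempotentProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker address as seen in the log
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");      // produces the "idempotent producer" log line
            props.put(ProducerConfig.ACKS_CONFIG, "all");                     // required when idempotence is enabled
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); // matches value.serializer above
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            }
        }
    }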
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) policy-pap | [2024-01-15T15:28:11.636+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) policy-pap | [2024-01-15T15:28:11.636+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | kafka | [2024-01-15 15:27:46,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | policy-pap | [2024-01-15T15:28:11.636+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705332491636 kafka | [2024-01-15 15:27:46,041] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, policy-acruntime-participant-0, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) policy-db-migrator | > upgrade 0100-pdp.sql policy-pap | [2024-01-15T15:28:11.636+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c0d91e8d-9f78-4650-83a5-95661336a408, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-01-15 15:27:46,041] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:11.636+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator kafka | [2024-01-15 15:27:46,072] INFO [Controller id=1] Acquired new producerId block 
ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
policy-pap | [2024-01-15T15:28:11.636+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
kafka | [2024-01-15 15:27:46,146] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
policy-pap | [2024-01-15T15:28:11.639+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
kafka | [2024-01-15 15:27:46,156] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator |
policy-pap | [2024-01-15T15:28:11.639+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
kafka | [2024-01-15 15:27:46,157] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-01-15T15:28:11.644+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
kafka | [2024-01-15 15:27:46,158] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-pap | [2024-01-15T15:28:11.644+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
kafka | [2024-01-15 15:27:46,162] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1.
(state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:11.644+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests kafka | [2024-01-15 15:27:46,178] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-pap | [2024-01-15T15:28:11.644+00:00|INFO|TimerManager|Thread-10] timer manager state-change started kafka | [2024-01-15 15:27:46,180] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:11.645+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer kafka | [2024-01-15 15:27:46,180] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-15T15:28:11.648+00:00|INFO|ServiceManager|main] Policy PAP started kafka | [2024-01-15 15:27:46,180] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-15T15:28:11.649+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 12.839 seconds (process running for 13.636) kafka | [2024-01-15 15:27:46,180] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-pap | [2024-01-15T15:28:11.650+00:00|INFO|TimerManager|Thread-9] timer manager update started kafka | [2024-01-15 15:27:46,187] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:12.096+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-15 15:27:46,188] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-pap | [2024-01-15T15:28:12.096+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: f-_bmQWWQMKgLbbohjyq1w kafka | [2024-01-15 15:27:46,188] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:12.096+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: f-_bmQWWQMKgLbbohjyq1w kafka | [2024-01-15 15:27:46,188] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-15T15:28:12.096+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Cluster ID: f-_bmQWWQMKgLbbohjyq1w kafka | [2024-01-15 15:27:46,189] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
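The WARN "Error while fetching metadata ... {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}" above, and the LEADER_NOT_AVAILABLE variant just after it, are the usual race between a client's first metadata fetch and broker-side topic auto-creation: the error code changes once creation begins and the warnings stop after leader election. A test that wants to avoid them entirely could pre-create the topic; a sketch with the Kafka admin client, assuming the single broker (id=1) visible in this run:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class PreCreateTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // 1 partition, replication factor 1: assumed to match the lone broker in this CSIT run
                admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1))).all().get();
            }
        }
    }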
(state.change.logger) policy-db-migrator | policy-pap | [2024-01-15T15:28:12.098+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-15 15:27:46,196] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-pap | [2024-01-15T15:28:12.098+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: f-_bmQWWQMKgLbbohjyq1w kafka | [2024-01-15 15:27:46,197] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:12.107+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) kafka | [2024-01-15 15:27:46,197] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-pap | [2024-01-15T15:28:12.107+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 5 with epoch 0 kafka | [2024-01-15 15:27:46,197] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:12.107+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) kafka | [2024-01-15 15:27:46,197] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
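The "Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)" entries use a synthetic node id: the consumer encodes its coordinator connection as Integer.MAX_VALUE minus the broker id (2147483646 corresponds to broker 1) so it does not collide with the ordinary connection to the same broker. Which broker coordinates a group follows from hashing the group id over the 50 __consumer_offsets partitions whose logs are being created throughout this output. A sketch of that mapping, assuming the default offsets.topic.num.partitions=50:

    public class CoordinatorMath {
        public static void main(String[] args) {
            String groupId = "policy-pap";      // group id seen in the consumer logs above
            int offsetsPartitions = 50;         // broker default; matches __consumer_offsets-0..49 in this log
            // The coordinator for the group is the leader of this __consumer_offsets partition.
            // (The broker itself uses an overflow-safe abs; Math.abs is close enough for a sketch.)
            int partition = Math.abs(groupId.hashCode()) % offsetsPartitions;
            System.out.println("__consumer_offsets-" + partition);
            int brokerId = 1;
            System.out.println("coordinator connection id: " + (Integer.MAX_VALUE - brokerId)); // 2147483646
        }
    }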
(state.change.logger) policy-db-migrator | policy-pap | [2024-01-15T15:28:12.111+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 4 with epoch 0 kafka | [2024-01-15 15:27:46,204] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-15T15:28:12.129+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group kafka | [2024-01-15 15:27:46,204] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-pap | [2024-01-15T15:28:12.132+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] (Re-)joining group kafka | [2024-01-15 15:27:46,205] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:12.167+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-504795a2-351e-499c-b91f-aa55dfac1219 kafka | [2024-01-15 15:27:46,205] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-pap | [2024-01-15T15:28:12.168+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,205] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-01-15T15:28:12.168+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-db-migrator | kafka | [2024-01-15 15:27:46,215] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-15T15:28:12.168+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Request joining group due to: need to re-join with the given member-id: consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3-1194ee19-c90a-46e2-bc39-435fa58f7711 policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,216] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-15T15:28:12.168+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) kafka | [2024-01-15 15:27:46,216] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) policy-pap | [2024-01-15T15:28:12.168+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] (Re-)joining group policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,216] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-15T15:28:15.173+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Successfully joined group with generation Generation{generationId=1, memberId='consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3-1194ee19-c90a-46e2-bc39-435fa58f7711', protocol='range'} policy-db-migrator | kafka | [2024-01-15 15:27:46,217] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
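The "rebalance failed due to 'The group member needs to have a valid member id...'" (MemberIdRequiredException) entries are expected on first contact rather than a real failure: since KIP-394 the broker rejects an anonymous JoinGroup, returns a member id, and the consumer immediately re-joins, which is exactly the paired "(Re-)joining group" lines before the generation-1 join succeeds above. Static membership avoids the repeat join across restarts; a hedged sketch (the group and topic come from the log, the instance id is invented):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class StaticMemberSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "policy-pap-4-static"); // hypothetical stable id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                consumer.poll(Duration.ofSeconds(1)); // first poll performs the join/sync seen in the log
            }
        }
    }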
(state.change.logger) policy-pap | [2024-01-15T15:28:15.174+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-504795a2-351e-499c-b91f-aa55dfac1219', protocol='range'} policy-db-migrator | kafka | [2024-01-15 15:27:46,228] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-15T15:28:15.181+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Finished assignment for group at generation 1: {consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3-1194ee19-c90a-46e2-bc39-435fa58f7711=Assignment(partitions=[policy-pdp-pap-0])} policy-db-migrator | > upgrade 0150-pdpstatistics.sql kafka | [2024-01-15 15:27:46,229] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-15T15:28:15.183+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-504795a2-351e-499c-b91f-aa55dfac1219=Assignment(partitions=[policy-pdp-pap-0])} policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,229] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) policy-pap | [2024-01-15T15:28:15.190+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-504795a2-351e-499c-b91f-aa55dfac1219', protocol='range'} policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL kafka | [2024-01-15 15:27:46,229] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-15T15:28:15.191+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,229] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
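Interleaved with the Kafka lines, policy-db-migrator is stepping through numbered upgrade scripts (0140-pk_pdpstatistics.sql, 0150-pdpstatistics.sql, ...) and echoing each SQL statement between "--------------" separators. A minimal sketch of that pattern over plain JDBC, reusing a statement echoed above; the connection details are assumptions for illustration, not taken from this job's configuration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigrationStep {
        public static void main(String[] args) throws Exception {
            // host, schema and credentials are invented placeholders
            try (Connection db = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user");
                 Statement stmt = db.createStatement()) {
                System.out.println("> upgrade 0150-pdpstatistics.sql");
                // statement text exactly as echoed in the log above
                stmt.execute("ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL");
            }
        }
    }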
(state.change.logger) policy-db-migrator | policy-pap | [2024-01-15T15:28:15.192+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Successfully synced group in generation Generation{generationId=1, memberId='consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3-1194ee19-c90a-46e2-bc39-435fa58f7711', protocol='range'} kafka | [2024-01-15 15:27:46,240] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-15T15:28:15.193+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) kafka | [2024-01-15 15:27:46,241] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-pap | [2024-01-15T15:28:15.198+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 kafka | [2024-01-15 15:27:46,241] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:15.198+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Adding newly assigned partitions: policy-pdp-pap-0 kafka | [2024-01-15 15:27:46,241] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-pap | [2024-01-15T15:28:15.205+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Found no committed offset for partition policy-pdp-pap-0 kafka | [2024-01-15 15:27:46,241] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:15.209+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 kafka | [2024-01-15 15:27:46,263] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-15T15:28:15.215+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3, groupId=aa559ce3-1840-4027-b443-4b66dabb9280] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. kafka | [2024-01-15 15:27:46,272] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-01-15T15:28:15.216+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. kafka | [2024-01-15 15:27:46,272] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-pap | [2024-01-15T15:28:31.547+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' kafka | [2024-01-15 15:27:46,272] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:31.547+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' kafka | [2024-01-15 15:27:46,273] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
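"Found no committed offset for partition policy-pdp-pap-0" followed by "Resetting offset ... to position FetchPosition{offset=1, ...}" is the auto.offset.reset path: the group has never committed anything, so the consumer falls back to its configured reset policy, evidently "latest" here since it starts at the log end instead of replaying from offset 0. A sketch of the same positioning done explicitly (hypothetical group id; topic and partition taken from the log):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class OffsetResetSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "offset-reset-demo");  // invented group, not one from this run
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");    // the policy the reset above implies
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("policy-pdp-pap", 0);
                consumer.assign(List.of(tp));
                consumer.seekToEnd(List.of(tp));           // manual equivalent of the automatic reset
                System.out.println(consumer.position(tp)); // at this point in the log this would print 1
            }
        }
    }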
(state.change.logger) policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-pap | [2024-01-15T15:28:31.553+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 4 ms kafka | [2024-01-15 15:27:46,282] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | JOIN pdpstatistics b policy-pap | [2024-01-15T15:28:33.037+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: kafka | [2024-01-15 15:27:46,284] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-pap | [] kafka | [2024-01-15 15:27:46,284] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) policy-db-migrator | SET a.id = b.id policy-pap | [2024-01-15T15:28:33.038+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-01-15 15:27:46,284] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"94e19f88-e655-41ee-a7e4-15c2eb23ed16","timestampMs":1705332513011,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup"} kafka | [2024-01-15 15:27:46,284] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | policy-pap | [2024-01-15T15:28:33.038+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-01-15 15:27:46,296] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"94e19f88-e655-41ee-a7e4-15c2eb23ed16","timestampMs":1705332513011,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup"} kafka | [2024-01-15 15:27:46,297] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-pap | [2024-01-15T15:28:33.045+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus kafka | [2024-01-15 15:27:46,297] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:33.173+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate starting kafka | [2024-01-15 15:27:46,297] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp policy-pap | [2024-01-15T15:28:33.173+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate starting listener kafka | [2024-01-15 15:27:46,297] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
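The JSON bodies logged under [IN|KAFKA|policy-pdp-pap] and [IN|KAFKA|policy-heartbeat] are PdpStatus heartbeats, and "no listeners for autonomous message of type PdpStatus" means nothing had registered for unsolicited status messages at that point. A sketch of extracting the routing fields from such a payload with Jackson (the payload is condensed from the log; the class is illustrative, not the PAP implementation):

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class HeartbeatPeek {
        public static void main(String[] args) throws Exception {
            String body = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                    + "\"messageName\":\"PDP_STATUS\",\"name\":\"apex-fdd8fd74-61db-44b2-a31b-65bcad895850\","
                    + "\"pdpGroup\":\"defaultGroup\"}";
            JsonNode msg = new ObjectMapper().readTree(body);
            // messageName decides which dispatcher handles it; requestId (omitted here) links replies to requests
            System.out.println(msg.path("messageName").asText() + " from " + msg.path("name").asText());
        }
    }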
(state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:33.173+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate starting timer kafka | [2024-01-15 15:27:46,305] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-15T15:28:33.173+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=3a4bff02-b148-4906-89e0-3452ec19425b, expireMs=1705332543173] kafka | [2024-01-15 15:27:46,306] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-01-15T15:28:33.175+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate starting enqueue kafka | [2024-01-15 15:27:46,306] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-pap | [2024-01-15T15:28:33.175+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate started kafka | [2024-01-15 15:27:46,306] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:33.175+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=3a4bff02-b148-4906-89e0-3452ec19425b, expireMs=1705332543173] kafka | [2024-01-15 15:27:46,306] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) policy-pap | [2024-01-15T15:28:33.179+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"3a4bff02-b148-4906-89e0-3452ec19425b","timestampMs":1705332513156,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-15 15:27:46,316] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-15T15:28:33.216+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-01-15 15:27:46,317] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"3a4bff02-b148-4906-89e0-3452ec19425b","timestampMs":1705332513156,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-15 15:27:46,317] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-pap | [2024-01-15T15:28:33.217+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-01-15 15:27:46,317] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-15T15:28:33.222+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-01-15 15:27:46,317] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-pap | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"3a4bff02-b148-4906-89e0-3452ec19425b","timestampMs":1705332513156,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-15 15:27:46,333] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,334] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-15T15:28:33.222+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-db-migrator | kafka | [2024-01-15 15:27:46,334] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) policy-pap | [2024-01-15T15:28:33.236+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | kafka | [2024-01-15 15:27:46,334] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"3a4bff02-b148-4906-89e0-3452ec19425b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"8e34aa81-cc44-41ed-9fa1-c9758c7eb343","timestampMs":1705332513227,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | > upgrade 0210-sequence.sql kafka | [2024-01-15 15:27:46,334] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
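"discarding event of type PDP_UPDATE" appears because PAP consumes the same topics it publishes to: its own outbound PDP_UPDATE comes straight back, and the MessageTypeDispatcher, which evidently routes on the messageName field and has no inbound listener for that type, drops it. A simplified stand-in for that routing (names and behaviour inferred from the log, not the actual class):

    import java.util.Map;
    import java.util.function.Consumer;

    public class TypeDispatcherSketch {
        // Route by messageName; discard anything without a registered listener.
        private final Map<String, Consumer<String>> listeners = Map.of(
                "PDP_STATUS", body -> System.out.println("handling status: " + body));

        void onMessage(String messageName, String body) {
            Consumer<String> listener = listeners.get(messageName);
            if (listener == null) {
                System.out.println("discarding event of type " + messageName); // e.g. PAP's own PDP_UPDATE echo
                return;
            }
            listener.accept(body);
        }
    }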
(state.change.logger) policy-pap | [2024-01-15T15:28:33.237+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate stopping policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,345] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-15T15:28:33.238+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate stopping enqueue policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-01-15 15:27:46,346] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-15T15:28:33.238+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate stopping timer policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,346] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) policy-pap | [2024-01-15T15:28:33.238+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=3a4bff02-b148-4906-89e0-3452ec19425b, expireMs=1705332543173] policy-db-migrator | kafka | [2024-01-15 15:27:46,349] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-15T15:28:33.238+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate stopping listener policy-db-migrator | kafka | [2024-01-15 15:27:46,349] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-01-15T15:28:33.238+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate stopped policy-db-migrator | > upgrade 0220-sequence.sql kafka | [2024-01-15 15:27:46,359] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,360] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-15T15:28:33.239+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-01-15 15:27:46,360] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"3a4bff02-b148-4906-89e0-3452ec19425b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"8e34aa81-cc44-41ed-9fa1-c9758c7eb343","timestampMs":1705332513227,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,360] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-15T15:28:33.240+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 3a4bff02-b148-4906-89e0-3452ec19425b policy-db-migrator | kafka | [2024-01-15 15:27:46,360] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger)
policy-pap | [2024-01-15T15:28:33.240+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator |
kafka | [2024-01-15 15:27:46,370] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"b9d06743-04f4-4bbd-9c7d-b4492be21465","timestampMs":1705332513226,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup"}
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
kafka | [2024-01-15 15:27:46,371] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-15T15:28:33.245+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate successful
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,371] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.246+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 start publishing next request
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
kafka | [2024-01-15 15:27:46,371] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.246+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange starting
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,371] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-15T15:28:33.246+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange starting listener
policy-db-migrator |
kafka | [2024-01-15 15:27:46,377] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-15T15:28:33.246+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange starting timer
policy-db-migrator |
kafka | [2024-01-15 15:27:46,377] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-15T15:28:33.246+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=9dc3ba67-47f7-4531-935c-8f19ab3aa9d2, expireMs=1705332543246]
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
kafka | [2024-01-15 15:27:46,377] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.246+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange starting enqueue
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,377] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.246+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange started
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
kafka | [2024-01-15 15:27:46,377] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-15T15:28:33.246+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=9dc3ba67-47f7-4531-935c-8f19ab3aa9d2, expireMs=1705332543246]
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,387] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-15T15:28:33.246+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator |
kafka | [2024-01-15 15:27:46,388] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9dc3ba67-47f7-4531-935c-8f19ab3aa9d2","timestampMs":1705332513157,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
kafka | [2024-01-15 15:27:46,389] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.295+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0120-toscatrigger.sql
kafka | [2024-01-15 15:27:46,389] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"b9d06743-04f4-4bbd-9c7d-b4492be21465","timestampMs":1705332513226,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup"}
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,389] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
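Note on the TimerManager lines above: the expireMs in "state-change timer registered" is the registration wall-clock time plus the 30 s state-change timeout that the "waiting 30000ms" line reports. A minimal sanity check (standalone Python, values copied verbatim from the log above):

# The Timer's expireMs minus the logged 30000ms wait should land exactly on
# the timestamp of the "timer registered" line (15:28:33.246 UTC).
from datetime import datetime, timezone

expire_ms = 1705332543246           # from Timer [name=9dc3ba67-..., expireMs=...]
registered_ms = expire_ms - 30000   # the logged 30000ms wait
print(datetime.fromtimestamp(registered_ms / 1000, tz=timezone.utc).isoformat())
# -> 2024-01-15T15:28:33.246000+00:00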
policy-pap | [2024-01-15T15:28:33.295+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
kafka | [2024-01-15 15:27:46,396] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-15T15:28:33.300+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,396] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9dc3ba67-47f7-4531-935c-8f19ab3aa9d2","timestampMs":1705332513157,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
kafka | [2024-01-15 15:27:46,396] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.300+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
policy-db-migrator |
kafka | [2024-01-15 15:27:46,396] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.303+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
kafka | [2024-01-15 15:27:46,396] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9dc3ba67-47f7-4531-935c-8f19ab3aa9d2","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"0b62d888-5f9f-45ef-a8c4-5186ef39c42f","timestampMs":1705332513257,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,402] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-15T15:28:33.330+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange stopping
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
kafka | [2024-01-15 15:27:46,402] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-15T15:28:33.330+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange stopping enqueue
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,402] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.330+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange stopping timer
policy-db-migrator |
kafka | [2024-01-15 15:27:46,402] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.330+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=9dc3ba67-47f7-4531-935c-8f19ab3aa9d2, expireMs=1705332543246]
policy-db-migrator |
kafka | [2024-01-15 15:27:46,402] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
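The exchange above shows the request/response pattern on the policy-pdp-pap topic: PAP publishes a PDP_STATE_CHANGE with a requestId, the apex PDP answers with a PDP_STATUS whose response.responseTo echoes that id, and PAP's dispatcher then cancels the pending timer. A minimal observer sketch of the same correlation, not PAP's implementation: it assumes kafka-python is installed and the bootstrap address is a placeholder for wherever the CSIT broker listens.

# Watch policy-pdp-pap and match PDP_STATUS replies against an outstanding
# requestId, mirroring the RequestIdDispatcher/TimerManager lines above.
import json
from kafka import KafkaConsumer

pending = {"9dc3ba67-47f7-4531-935c-8f19ab3aa9d2"}  # requestId from the log above

consumer = KafkaConsumer(
    "policy-pdp-pap",
    bootstrap_servers="localhost:9092",  # placeholder, not from the log
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    msg = record.value
    if msg.get("messageName") != "PDP_STATUS":
        continue
    response = msg.get("response") or {}
    if response.get("responseTo") in pending:
        pending.discard(response["responseTo"])
        print(response["responseStatus"], response.get("responseMessage"))
        if not pending:
            break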
policy-pap | [2024-01-15T15:28:33.330+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange stopping listener
policy-db-migrator | > upgrade 0140-toscaparameter.sql
kafka | [2024-01-15 15:27:46,410] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-15T15:28:33.330+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange stopped
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,410] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-15T15:28:33.331+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpStateChange successful
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
kafka | [2024-01-15 15:27:46,410] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.331+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 start publishing next request
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,410] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.333+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate starting
policy-db-migrator |
kafka | [2024-01-15 15:27:46,410] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-15T15:28:33.333+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate starting listener
policy-db-migrator |
kafka | [2024-01-15 15:27:46,422] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-15T15:28:33.333+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate starting timer
policy-db-migrator | > upgrade 0150-toscaproperty.sql
kafka | [2024-01-15 15:27:46,423] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-15T15:28:33.334+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=186de137-5a34-429d-99f9-1d40097069e1, expireMs=1705332543334]
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,423] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.334+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate starting enqueue
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
kafka | [2024-01-15 15:27:46,423] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.334+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate started
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,423] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-15T15:28:33.334+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator |
kafka | [2024-01-15 15:27:46,430] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"186de137-5a34-429d-99f9-1d40097069e1","timestampMs":1705332513289,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,430] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-15T15:28:33.334+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
kafka | [2024-01-15 15:27:46,430] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
policy-pap | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9dc3ba67-47f7-4531-935c-8f19ab3aa9d2","timestampMs":1705332513157,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,430] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.335+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
policy-db-migrator |
kafka | [2024-01-15 15:27:46,430] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,439] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-15T15:28:33.342+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
kafka | [2024-01-15 15:27:46,440] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9dc3ba67-47f7-4531-935c-8f19ab3aa9d2","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"0b62d888-5f9f-45ef-a8c4-5186ef39c42f","timestampMs":1705332513257,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,440] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.342+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
kafka | [2024-01-15 15:27:46,440] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"186de137-5a34-429d-99f9-1d40097069e1","timestampMs":1705332513289,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
kafka | [2024-01-15 15:27:46,440] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-15T15:28:33.342+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
kafka | [2024-01-15 15:27:46,452] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-15T15:28:33.342+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9dc3ba67-47f7-4531-935c-8f19ab3aa9d2
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,453] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-15T15:28:33.346+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
kafka | [2024-01-15 15:27:46,453] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
policy-pap | {"source":"pap-ca0c3d5f-63b9-4ccf-b827-a7b04baa6325","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"186de137-5a34-429d-99f9-1d40097069e1","timestampMs":1705332513289,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,453] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.346+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-db-migrator |
kafka | [2024-01-15 15:27:46,453] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-15T15:28:33.355+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,464] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"186de137-5a34-429d-99f9-1d40097069e1","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae00df6d-b992-4fdd-9c88-e5121e0a590c","timestampMs":1705332513347,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
kafka | [2024-01-15 15:27:46,465] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-15T15:28:33.355+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,465] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"186de137-5a34-429d-99f9-1d40097069e1","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae00df6d-b992-4fdd-9c88-e5121e0a590c","timestampMs":1705332513347,"name":"apex-fdd8fd74-61db-44b2-a31b-65bcad895850","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
kafka | [2024-01-15 15:27:46,465] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.356+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 186de137-5a34-429d-99f9-1d40097069e1
policy-db-migrator |
kafka | [2024-01-15 15:27:46,465] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-15T15:28:33.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate stopping
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
kafka | [2024-01-15 15:27:46,472] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-15T15:28:33.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate stopping enqueue
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,472] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-15T15:28:33.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate stopping timer
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
kafka | [2024-01-15 15:27:46,474] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.356+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=186de137-5a34-429d-99f9-1d40097069e1, expireMs=1705332543334]
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,474] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-15T15:28:33.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate stopping listener
policy-db-migrator |
kafka | [2024-01-15 15:27:46,474] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-15T15:28:33.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate stopped
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,478] INFO [LogLoader partition=policy-acruntime-participant-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-15T15:28:33.363+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 PdpUpdate successful
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
kafka | [2024-01-15 15:27:46,479] INFO Created log for partition policy-acruntime-participant-0 in /var/lib/kafka/data/policy-acruntime-participant-0 with properties {} (kafka.log.LogManager)
policy-pap | [2024-01-15T15:28:33.363+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-fdd8fd74-61db-44b2-a31b-65bcad895850 has no more requests
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,479] INFO [Partition policy-acruntime-participant-0 broker=1] No checkpointed highwatermark is found for partition policy-acruntime-participant-0 (kafka.cluster.Partition)
policy-db-migrator |
kafka | [2024-01-15 15:27:46,479] INFO [Partition policy-acruntime-participant-0 broker=1] Log loaded for partition policy-acruntime-participant-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
kafka | [2024-01-15 15:27:46,479] INFO [Broker id=1] Leader policy-acruntime-participant-0 with topic id Some(5o5LE87SQnOpV49rYbYX3g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,484] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0100-upgrade.sql
kafka | [2024-01-15 15:27:46,493] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,493] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
policy-db-migrator | select 'upgrade to 1100 completed' as msg
kafka | [2024-01-15 15:27:46,493] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-15 15:27:46,494] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-15 15:27:46,506] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-15 15:27:46,507] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-15 15:27:46,507] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2024-01-15 15:27:46,507] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-15 15:27:46,508] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-15 15:27:46,540] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-15 15:27:46,541] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-15 15:27:46,545] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2024-01-15 15:27:46,545] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-15 15:27:46,546] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
kafka | [2024-01-15 15:27:46,552] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
kafka | [2024-01-15 15:27:46,565] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | msg
kafka | [2024-01-15 15:27:46,565] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
policy-db-migrator | upgrade to 1100 completed
kafka | [2024-01-15 15:27:46,565] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
kafka | [2024-01-15 15:27:46,565] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
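The migrator lines above show 0100-upgrade.sql finishing with a sanity SELECT whose result set ("msg" header, "upgrade to 1100 completed" row) is echoed back. A minimal sketch of replaying that same query by hand against the policy database; it assumes PyMySQL is installed, and the host, port, user, password, and database name are placeholders (the real values come from the CSIT compose configuration, not this log).

# Re-run the migrator's sanity SELECT manually; connection details are
# placeholders and must be replaced with the actual CSIT settings.
import pymysql

conn = pymysql.connect(
    host="localhost", port=3306,                  # placeholders
    user="policy_user", password="policy_user",   # placeholders
    database="policyadmin",                       # placeholder
)
try:
    with conn.cursor() as cur:
        cur.execute("select 'upgrade to 1100 completed' as msg")
        (msg,) = cur.fetchone()
        print(msg)  # -> upgrade to 1100 completed, as echoed in the log
finally:
    conn.close()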
(state.change.logger) policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql kafka | [2024-01-15 15:27:46,582] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,583] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | kafka | [2024-01-15 15:27:46,583] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,583] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-01-15 15:27:46,583] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,594] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics kafka | [2024-01-15 15:27:46,595] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,595] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2024-01-15 15:27:46,595] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,595] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,604] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-01-15 15:27:46,605] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,605] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,605] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,605] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) kafka | [2024-01-15 15:27:46,618] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-15 15:27:46,618] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-15 15:27:46,618] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,618] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,618] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | kafka | [2024-01-15 15:27:46,630] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0130-statistics_sequence.sql kafka | [2024-01-15 15:27:46,631] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,631] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-01-15 15:27:46,631] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,631] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | kafka | [2024-01-15 15:27:46,642] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,643] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-01-15 15:27:46,643] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,643] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,643] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,658] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | TRUNCATE TABLE sequence kafka | [2024-01-15 15:27:46,658] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,658] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,658] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,659] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | > upgrade 0100-pdpstatistics.sql kafka | [2024-01-15 15:27:46,668] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics kafka | [2024-01-15 15:27:46,669] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,669] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,670] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-15 15:27:46,670] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,676] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | DROP TABLE pdpstatistics kafka | [2024-01-15 15:27:46,677] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,677] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,677] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,677] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql kafka | [2024-01-15 15:27:46,688] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,689] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats kafka | [2024-01-15 15:27:46,689] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,690] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | kafka | [2024-01-15 15:27:46,690] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | kafka | [2024-01-15 15:27:46,697] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0120-statistics_sequence.sql kafka | [2024-01-15 15:27:46,697] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,697] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) policy-db-migrator | DROP TABLE statistics_sequence kafka | [2024-01-15 15:27:46,698] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- kafka | [2024-01-15 15:27:46,698] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | kafka | [2024-01-15 15:27:46,710] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | name version kafka | [2024-01-15 15:27:46,718] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policyadmin 1300 kafka | [2024-01-15 15:27:46,718] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) policy-db-migrator | ID script operation from_version to_version tag success atTime kafka | [2024-01-15 15:27:46,718] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:28 kafka | [2024-01-15 15:27:46,718] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:28 kafka | [2024-01-15 15:27:46,739] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:28 kafka | [2024-01-15 15:27:46,739] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,740] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,740] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,740] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,757] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,758] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,758] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,758] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,758] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,776] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,777] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,783] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,783] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,783] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,803] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,804] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,804] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,804] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,804] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,817] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:29 kafka | [2024-01-15 15:27:46,818] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30 kafka | [2024-01-15 15:27:46,818] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30 kafka | [2024-01-15 15:27:46,818] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30 kafka | [2024-01-15 15:27:46,818] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30 kafka | [2024-01-15 15:27:46,882] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30 kafka | [2024-01-15 15:27:46,883] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30 kafka | [2024-01-15 15:27:46,883] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30 kafka | [2024-01-15 15:27:46,883] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30 kafka | [2024-01-15 15:27:46,884] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(lbqh0qd0Q4eTI3B1U7lVXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,892] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:30
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-acruntime-participant-0 (state.change.logger)
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:31
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,893] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,902] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,907] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
kafka | [2024-01-15 15:27:46,910] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
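
Each "become-leader transition" TRACE above marks one partition this single broker now leads: the 50 partitions of the compacted __consumer_offsets topic plus policy-acruntime-participant-0, which together account for the 51 partitions reported by the "Finished LeaderAndIsr request" summary further down. A quick tally from a saved copy of this log (the console.log file name is an assumption):

from collections import Counter
import re

BECOME_LEADER = re.compile(
    r"become-leader transition for partition (\S+) \(state\.change\.logger\)"
)

with open("console.log", encoding="utf-8") as fh:
    parts = BECOME_LEADER.findall(fh.read())

# Strip the trailing partition number to group by topic name.
topics = Counter(p.rsplit("-", 1)[0] for p in parts)
print(topics)      # expect __consumer_offsets: 50, policy-acruntime-participant: 1
print(len(parts))  # expect 51, matching the LeaderAndIsr summary below
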
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:32
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,910] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,910] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,910] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,910] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,910] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,910] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,910] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,910] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1501241527280800u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:33
kafka | [2024-01-15 15:27:46,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,911] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1501241527280900u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,911] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1501241527281000u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1501241527281000u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1501241527281000u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1501241527281000u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1501241527281000u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1501241527281000u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1501241527281000u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1501241527281000u 1 2024-01-15 15:27:34
kafka | [2024-01-15 15:27:46,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1501241527281000u 1 2024-01-15 15:27:35
kafka | [2024-01-15 15:27:46,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1501241527281100u 1 2024-01-15 15:27:35
kafka | [2024-01-15 15:27:46,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1501241527281200u 1 2024-01-15 15:27:35
kafka | [2024-01-15 15:27:46,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1501241527281200u 1 2024-01-15 15:27:35
kafka | [2024-01-15 15:27:46,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
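
The "Elected as the group coordinator for partition N" entries run over all 50 __consumer_offsets partitions because, with a single broker, it owns every one. Which of those partitions serves a given consumer group is derived from the group id; a minimal sketch of the standard mapping Kafka uses (Java String.hashCode with the sign bit cleared, modulo the offsets-topic partition count, 50 by default; the example group id is hypothetical):

def java_string_hash(s: str) -> int:
    # 32-bit Java String.hashCode equivalent.
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h

def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
    # Kafka clears the sign bit (Utils.abs) before taking the modulus.
    return (java_string_hash(group_id) & 0x7FFFFFFF) % num_partitions

# Commits and heartbeats from this hypothetical group would be handled by
# the coordinator owning the returned __consumer_offsets partition:
print(coordinator_partition("policy-clamp-example-group"))
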
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1501241527281200u 1 2024-01-15 15:27:35
kafka | [2024-01-15 15:27:46,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1501241527281200u 1 2024-01-15 15:27:35
kafka | [2024-01-15 15:27:46,912] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1501241527281300u 1 2024-01-15 15:27:35
kafka | [2024-01-15 15:27:46,912] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1501241527281300u 1 2024-01-15 15:27:35
kafka | [2024-01-15 15:27:46,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1501241527281300u 1 2024-01-15 15:27:35
kafka | [2024-01-15 15:27:46,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | policyadmin: OK @ 1300
kafka | [2024-01-15 15:27:46,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,913] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,913] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,914] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,914] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,915] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,915] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,915] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,915] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,915] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,915] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,915] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,915] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,915] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,915] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,915] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,915] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,915] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,915] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,915] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,915] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,915] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,915] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,915] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,915] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,916] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,916] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:46,916] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,917] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,920] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,920] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,921] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,921] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,921] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
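
The "Finished loading offsets and group metadata ... of which N milliseconds was spent in the scheduler" entries here and below time the coordinator's load of each partition's offset cache. A small aggregation sketch over a saved copy of this log (console.log is an assumed file name):

import re

LOADED = re.compile(
    r"Finished loading offsets and group metadata from (__consumer_offsets-\d+) "
    r"in (\d+) milliseconds for epoch \d+, "
    r"of which (\d+) milliseconds was spent in the scheduler\."
)

with open("console.log", encoding="utf-8") as fh:
    rows = [(p, int(t), int(s)) for p, t, s in LOADED.findall(fh.read())]

print(f"{len(rows)} partitions loaded")               # expect 50
print("slowest:", max(rows, key=lambda r: r[1]))      # partition with largest total
print("mean ms:", sum(r[1] for r in rows) / len(rows))
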
kafka | [2024-01-15 15:27:46,921] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,921] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,921] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,921] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,921] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,922] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,922] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,922] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,922] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,922] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,922] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,922] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,922] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,923] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,923] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,923] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,923] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,923] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,923] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,923] INFO [Broker id=1] Finished LeaderAndIsr request in 937ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-01-15 15:27:46,923] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,923] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,924] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,925] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-15 15:27:46,926] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
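
With every __consumer_offsets partition loaded, the coordinator can serve group membership and offset commits; each commit is appended to whichever partition the group id hashes to and is later compacted per the cleanup.policy=compact setting shown when the logs were created above. A hedged client-side sketch using confluent-kafka (a library choice of this note, not something the CSIT job itself uses; the group id is hypothetical, the broker address matches the kafka:9092 seen below):

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "example-group",          # hashes to one __consumer_offsets partition
    "enable.auto.commit": False,
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["policy-acruntime-participant"])

msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    # Synchronous commit: the offset record lands in the coordinator's partition.
    consumer.commit(message=msg, asynchronous=False)
consumer.close()
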
kafka | [2024-01-15 15:27:46,933] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=lbqh0qd0Q4eTI3B1U7lVXA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=5o5LE87SQnOpV49rYbYX3g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-15 15:27:46,939] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-acruntime-participant', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-acruntime-participant-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-15 15:27:46,939] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-15 15:27:46,939] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-15 15:27:46,939] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-15 15:27:46,939] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 
(state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,940] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-15 15:27:46,942] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 2 (state.change.logger)
kafka | [2024-01-15 15:27:46,943] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-15 15:27:46,993] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 75c1079f-283a-4d21-9ddf-3c97158a5ec8 in Empty state. Created a new member id consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2-8bda53c2-b517-48c5-903d-6393b9c731ee and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:47,021] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group e9905a02-978a-4543-aec8-71a3b4617c31 in Empty state. Created a new member id consumer-e9905a02-978a-4543-aec8-71a3b4617c31-2-a2605d7a-e610-42ce-915b-b633f6cc9ac9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:47,023] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 845348fa-d712-41dc-bc31-ba3c79964bd7 in Empty state. Created a new member id consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2-3b6606cc-038f-415a-9dc8-f836b93e7c8f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:47,050] INFO [GroupCoordinator 1]: Preparing to rebalance group e9905a02-978a-4543-aec8-71a3b4617c31 in state PreparingRebalance with old generation 0 (__consumer_offsets-26) (reason: Adding new member consumer-e9905a02-978a-4543-aec8-71a3b4617c31-2-a2605d7a-e610-42ce-915b-b633f6cc9ac9 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:47,061] INFO [GroupCoordinator 1]: Preparing to rebalance group 75c1079f-283a-4d21-9ddf-3c97158a5ec8 in state PreparingRebalance with old generation 0 (__consumer_offsets-20) (reason: Adding new member consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2-8bda53c2-b517-48c5-903d-6393b9c731ee with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:47,061] INFO [GroupCoordinator 1]: Preparing to rebalance group 845348fa-d712-41dc-bc31-ba3c79964bd7 in state PreparingRebalance with old generation 0 (__consumer_offsets-15) (reason: Adding new member consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2-3b6606cc-038f-415a-9dc8-f836b93e7c8f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:50,062] INFO [GroupCoordinator 1]: Stabilized group e9905a02-978a-4543-aec8-71a3b4617c31 generation 1 (__consumer_offsets-26) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:50,067] INFO [GroupCoordinator 1]: Stabilized group 75c1079f-283a-4d21-9ddf-3c97158a5ec8 generation 1 (__consumer_offsets-20) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:50,068] INFO [GroupCoordinator 1]: Stabilized group 845348fa-d712-41dc-bc31-ba3c79964bd7 generation 1 (__consumer_offsets-15) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:50,097] INFO [GroupCoordinator 1]: Assignment received from leader consumer-75c1079f-283a-4d21-9ddf-3c97158a5ec8-2-8bda53c2-b517-48c5-903d-6393b9c731ee for group 75c1079f-283a-4d21-9ddf-3c97158a5ec8 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:50,104] INFO [GroupCoordinator 1]: Assignment received from leader consumer-845348fa-d712-41dc-bc31-ba3c79964bd7-2-3b6606cc-038f-415a-9dc8-f836b93e7c8f for group 845348fa-d712-41dc-bc31-ba3c79964bd7 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:27:50,106] INFO [GroupCoordinator 1]: Assignment received from leader consumer-e9905a02-978a-4543-aec8-71a3b4617c31-2-a2605d7a-e610-42ce-915b-b633f6cc9ac9 for group e9905a02-978a-4543-aec8-71a3b4617c31 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:06,524] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group cbd2f33b-08da-4df5-9be7-9783ed68c1a9 in Empty state. Created a new member id consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2-e8065291-3c92-4be5-abaf-df92508c2764 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:06,530] INFO [GroupCoordinator 1]: Preparing to rebalance group cbd2f33b-08da-4df5-9be7-9783ed68c1a9 in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2-e8065291-3c92-4be5-abaf-df92508c2764 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:09,532] INFO [GroupCoordinator 1]: Stabilized group cbd2f33b-08da-4df5-9be7-9783ed68c1a9 generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:09,549] INFO [GroupCoordinator 1]: Assignment received from leader consumer-cbd2f33b-08da-4df5-9be7-9783ed68c1a9-2-e8065291-3c92-4be5-abaf-df92508c2764 for group cbd2f33b-08da-4df5-9be7-9783ed68c1a9 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
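[Editor's note: the sequence above is the normal first-contact handshake for dynamic consumer groups. Each member joins with an unknown member id, the coordinator hands back a generated id (the first join is rejected with MemberIdRequiredException by design), the member rejoins, and the group moves Empty -> PreparingRebalance -> Stabilized before the leader's assignment closes generation 1. Note that each group is parked on a single __consumer_offsets partition, e.g. group e9905a02... on __consumer_offsets-26. A minimal sketch for inspecting this state from inside the broker container, assuming the standard Kafka CLI tools are on the image's PATH (on an Apache Kafka tarball the tool is kafka-consumer-groups.sh; the group id below is copied from the log, nothing else is taken from the job):

# list all consumer groups this coordinator is tracking
kafka-consumer-groups --bootstrap-server kafka:9092 --list

# show members, generation, and partition assignment for one group seen above
kafka-consumer-groups --bootstrap-server kafka:9092 --describe \
  --group 75c1079f-283a-4d21-9ddf-3c97158a5ec8 --members --verbose
]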
kafka | [2024-01-15 15:28:12,090] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2024-01-15 15:28:12,102] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(xp5GYcGpRIeMJ_u9UP8PQQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2024-01-15 15:28:12,102] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
kafka | [2024-01-15 15:28:12,102] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-15 15:28:12,102] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-01-15 15:28:12,103] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-15 15:28:12,103] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-01-15 15:28:12,113] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-15 15:28:12,113] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-01-15 15:28:12,113] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2024-01-15 15:28:12,114] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
kafka | [2024-01-15 15:28:12,114] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-15 15:28:12,114] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-01-15 15:28:12,115] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger)
kafka | [2024-01-15 15:28:12,115] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-15 15:28:12,116] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-01-15 15:28:12,116] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
kafka | [2024-01-15 15:28:12,116] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
kafka | [2024-01-15 15:28:12,119] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-15 15:28:12,119] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
kafka | [2024-01-15 15:28:12,120] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
kafka | [2024-01-15 15:28:12,120] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-15 15:28:12,120] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(xp5GYcGpRIeMJ_u9UP8PQQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-15 15:28:12,123] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-01-15 15:28:12,124] INFO [Broker id=1] Finished LeaderAndIsr request in 9ms correlationId 3 from controller 1 for 1 partitions (state.change.logger)
kafka | [2024-01-15 15:28:12,124] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=xp5GYcGpRIeMJ_u9UP8PQQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-15 15:28:12,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-15 15:28:12,126] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-15 15:28:12,128] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-15 15:28:12,166] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-504795a2-351e-499c-b91f-aa55dfac1219 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:12,166] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group aa559ce3-1840-4027-b443-4b66dabb9280 in Empty state. Created a new member id consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3-1194ee19-c90a-46e2-bc39-435fa58f7711 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:12,170] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-504795a2-351e-499c-b91f-aa55dfac1219 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:12,170] INFO [GroupCoordinator 1]: Preparing to rebalance group aa559ce3-1840-4027-b443-4b66dabb9280 in state PreparingRebalance with old generation 0 (__consumer_offsets-23) (reason: Adding new member consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3-1194ee19-c90a-46e2-bc39-435fa58f7711 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:13,350] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group a66a1343-5c6d-4c55-860d-3d5ddcaf6219 in Empty state. Created a new member id consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2-dbebc620-f1dc-45e0-b152-ec6429c1646b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:13,352] INFO [GroupCoordinator 1]: Preparing to rebalance group a66a1343-5c6d-4c55-860d-3d5ddcaf6219 in state PreparingRebalance with old generation 0 (__consumer_offsets-14) (reason: Adding new member consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2-dbebc620-f1dc-45e0-b152-ec6429c1646b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:15,172] INFO [GroupCoordinator 1]: Stabilized group aa559ce3-1840-4027-b443-4b66dabb9280 generation 1 (__consumer_offsets-23) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:15,173] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:15,188] INFO [GroupCoordinator 1]: Assignment received from leader consumer-aa559ce3-1840-4027-b443-4b66dabb9280-3-1194ee19-c90a-46e2-bc39-435fa58f7711 for group aa559ce3-1840-4027-b443-4b66dabb9280 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:15,188] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-504795a2-351e-499c-b91f-aa55dfac1219 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:16,354] INFO [GroupCoordinator 1]: Stabilized group a66a1343-5c6d-4c55-860d-3d5ddcaf6219 generation 1 (__consumer_offsets-14) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:16,366] INFO [GroupCoordinator 1]: Assignment received from leader consumer-a66a1343-5c6d-4c55-860d-3d5ddcaf6219-2-dbebc620-f1dc-45e0-b152-ec6429c1646b for group a66a1343-5c6d-4c55-860d-3d5ddcaf6219 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:26,196] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 53971418-7c64-4b4a-8b2e-3deb55882781 in Empty state. Created a new member id consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2-9e834808-7054-48b9-9bb6-b00f229f65f7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:26,200] INFO [GroupCoordinator 1]: Preparing to rebalance group 53971418-7c64-4b4a-8b2e-3deb55882781 in state PreparingRebalance with old generation 0 (__consumer_offsets-12) (reason: Adding new member consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2-9e834808-7054-48b9-9bb6-b00f229f65f7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:29,201] INFO [GroupCoordinator 1]: Stabilized group 53971418-7c64-4b4a-8b2e-3deb55882781 generation 1 (__consumer_offsets-12) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-15 15:28:29,220] INFO [GroupCoordinator 1]: Assignment received from leader consumer-53971418-7c64-4b4a-8b2e-3deb55882781-2-9e834808-7054-48b9-9bb6-b00f229f65f7 for group 53971418-7c64-4b4a-8b2e-3deb55882781 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-clamp-runtime-acm ...
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping policy-clamp-ac-pf-ppnt ...
Stopping policy-api ...
Stopping policy-clamp-ac-http-ppnt ...
Stopping policy-clamp-ac-sim-ppnt ...
Stopping policy-clamp-ac-k8s-ppnt ...
Stopping kafka ...
Stopping compose_zookeeper_1 ...
Stopping simulator ...
Stopping mariadb ...
Stopping policy-clamp-runtime-acm ... done
Stopping policy-apex-pdp ... done
Stopping policy-clamp-ac-k8s-ppnt ... done
Stopping policy-clamp-ac-sim-ppnt ... done
Stopping policy-clamp-ac-pf-ppnt ... done
Stopping policy-clamp-ac-http-ppnt ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-clamp-runtime-acm ...
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing policy-clamp-ac-pf-ppnt ...
Removing policy-api ...
Removing policy-clamp-ac-http-ppnt ...
Removing policy-clamp-ac-sim-ppnt ...
Removing policy-clamp-ac-k8s-ppnt ...
Removing policy-db-migrator ...
Removing kafka ...
Removing compose_zookeeper_1 ...
Removing simulator ...
Removing mariadb ...
Removing policy-clamp-ac-http-ppnt ... done
Removing policy-apex-pdp ... done
Removing kafka ... done
Removing policy-clamp-ac-k8s-ppnt ... done
Removing policy-api ... done
Removing policy-db-migrator ... done
Removing policy-clamp-runtime-acm ... done
Removing mariadb ... done
Removing compose_zookeeper_1 ... done
Removing simulator ... done
Removing policy-clamp-ac-sim-ppnt ... done
Removing policy-clamp-ac-pf-ppnt ... done
Removing policy-pap ... done
Removing network compose_default
++ cd /w/workspace/policy-clamp-master-project-csit-clamp
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.qcL4gSdRzk ]]
+ rsync -av /tmp/tmp.qcL4gSdRzk/ /w/workspace/policy-clamp-master-project-csit-clamp/csit/archives/clamp
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 681,617 bytes received 95 bytes 1,363,424.00 bytes/sec
total size is 681,127 speedup is 1.00
+ rm -rf /w/workspace/policy-clamp-master-project-csit-clamp/models
+ exit 6
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2128 killed;
[ssh-agent] Stopped.
Robot results publisher started...
-Parsing output xml: Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins11866239853761772478.sh
---> sysstat.sh
[policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins17548375880456670096.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-clamp-master-project-csit-clamp
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-clamp-master-project-csit-clamp ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-clamp-master-project-csit-clamp ']'
+ mkdir -p /w/workspace/policy-clamp-master-project-csit-clamp/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-clamp-master-project-csit-clamp/archives/
[policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins14208222128073478221.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-clamp-master-project-csit-clamp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5bqd from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-5bqd/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins1591992828826885945.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-clamp-master-project-csit-clamp@tmp/config12585531719287286194tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
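[Editor's note: the teardown-and-archive tail above is the standard CSIT exit path: bring the compose stack down with volumes and orphans removed, rsync the Robot Framework artifacts out of the temporary results directory into the workspace archive, and exit with the suite's return code (6 here), which is what marks the 'Execute shell' step as failed. A condensed sketch of that pattern, where ROBOT_TMP and RC are illustrative stand-ins for the generated /tmp/tmp.* path and the suite's exit code, not names from the job scripts:

# stop and delete the containers, their anonymous volumes, and any orphans
docker-compose down -v --remove-orphans

# archive Robot results (log.html, output.xml, report.html, testplan.txt)
# so the Jenkins Robot publisher can pick them up from the workspace
rsync -av "${ROBOT_TMP}/" "${WORKSPACE}/csit/archives/clamp"

# propagate the suite result; any non-zero value fails the build step
exit "${RC}"
]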
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins10611622591703877658.sh
---> create-netrc.sh
[policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins6268796122097376972.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-clamp-master-project-csit-clamp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5bqd from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-5bqd/bin to PATH
[policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins17897554410778126578.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-clamp-master-project-csit-clamp] $ /bin/bash /tmp/jenkins9383908903804885278.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-clamp-master-project-csit-clamp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5bqd from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-5bqd/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-clamp-master-project-csit-clamp] $ /bin/bash -l /tmp/jenkins10296232373484170504.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-clamp-master-project-csit-clamp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5bqd from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-5bqd/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-clamp-master-project-csit-clamp/1083
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
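[Editor's note: both pip resolver errors above are genuine version conflicts inside the shared /tmp/venv-5bqd venv: lftools 0.37.8 pins openstacksdk<1.5.0 while python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, so each successive install breaks the other's pin (the job tolerates this, hence the build continues). One hedged mitigation would be a pip constraints file that holds the OpenStack client on a line compatible with lftools' pin; the version bounds below are illustrative assumptions, not values taken from these scripts:

# hypothetical constraints keeping both tools inside one resolvable range
cat > /tmp/os-constraints.txt <<'EOF'
openstacksdk<1.5.0
python-openstackclient<6.4.0
EOF
pip install -c /tmp/os-constraints.txt lftools python-openstackclient
]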
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-12484 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC-Rome Processor
Stepping: 0
CPU MHz: 2800.000
BogoMIPS: 5600.00
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  141G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001
---> free -m:
        total   used   free  shared  buff/cache  available
Mem:    32167    858  24752       0        6555      30852
Swap:    1023      0   1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:c4:e0:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.128/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85967sec preferred_lft 85967sec
    inet6 fe80::f816:3eff:fec4:e0b8/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d3:7e:29:ad brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-12484) 01/15/24 _x86_64_ (8 CPU)
15:23:11 LINUX RESTART (8 CPU)
15:24:02    tps      rtps    wtps    bread/s   bwrtn/s
15:25:01    96.66    15.49   81.17   1031.69   21046.47
15:26:01   128.20    23.08  105.12   2760.07   26379.60
15:27:01   159.88     0.12  159.76      9.86   95183.74
15:28:01   276.47    15.31  261.16    818.93   75567.77
15:29:01    24.41     0.57   23.85     41.86   13606.18
15:30:01    75.17     1.22   73.95    103.32   14413.63
Average:   126.88     9.28  117.60    793.61   41090.07
15:24:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
15:25:01 30009484 31678620 2929736 8.89 75280 1899712 1460856 4.30 906772 1725432 174288
15:26:01 28375568 31692008 4563652 13.85 111724 3449788 1374156 4.04 978460 3184684 1342440
15:27:01 25470976 31683048 7468244 22.67 129560 6221440 1405112 4.13 1005080 5953528 1664884
15:28:01 22427816 28821136 10511404 31.91 139304 6371664 10665016 31.38 4079880 5827800 1612
15:29:01 20839436 27267644 12099784 36.73 140164 6402348 12239152 36.01 5712512 5766196 144
15:30:01 25387400 31626380 7551820 22.93 143364 6227548 1524228 4.48 1422984 5594668 40832
Average: 25418447 30461473 7520773 22.83 123233 5095417 4778087 14.06 2350948 4675385 537367
15:24:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
15:25:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:25:01 ens3 63.11 41.94 968.20 9.50 0.00 0.00 0.00 0.00
15:25:01 lo 1.08 1.08 0.12 0.12 0.00 0.00 0.00 0.00
15:26:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:26:01 br-2b67f4c2f7f1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:26:01 ens3 616.36 351.99 10761.70 31.47 0.00 0.00 0.00 0.00
15:26:01 lo 7.20 7.20 0.74 0.74 0.00 0.00 0.00 0.00
15:27:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:27:01 br-2b67f4c2f7f1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:27:01 ens3 1002.93 562.25 30707.41 38.57 0.00 0.00 0.00 0.00
15:27:01 lo 7.66 7.66 0.70 0.70 0.00 0.00 0.00 0.00
15:28:01 veth64049d6 1.00 1.32 0.06 0.08 0.00 0.00 0.00 0.00
15:28:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:28:01 veth65271d0 1.58 2.03 0.20 0.20 0.00 0.00 0.00 0.00
15:28:01 veth6e28915 0.80 0.97 0.05 0.05 0.00 0.00 0.00 0.00
15:29:01 veth64049d6 3.92 5.15 0.60 0.53 0.00 0.00 0.00 0.00
15:29:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:29:01 veth65271d0 3.70 5.35 0.59 0.50 0.00 0.00 0.00 0.00
15:29:01 veth6e28915 3.95 5.00 0.59 0.50 0.00 0.00 0.00 0.00
15:30:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:30:01 ens3 2207.07 1299.85 43400.97 184.42 0.00 0.00 0.00 0.00
15:30:01 lo 26.95 26.95 5.21 5.21 0.00 0.00 0.00 0.00
Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: ens3 295.51 170.89 7104.31 20.49 0.00 0.00 0.00 0.00
Average: lo 3.78 3.78 0.81 0.81 0.00 0.00 0.00 0.00
---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-12484) 01/15/24 _x86_64_ (8 CPU)
15:23:11 LINUX RESTART (8 CPU)
15:24:02 CPU %user %nice %system %iowait %steal %idle
15:25:01 all 10.04 0.00 0.75 2.44 0.03 86.74
15:25:01 0 2.20 0.00 0.19 0.22 0.02 97.37
15:25:01 1 20.14 0.00 1.20 2.24 0.05 76.36
15:25:01 2 20.22 0.00 1.22 1.04 0.05 77.47
15:25:01 3 17.48 0.00 1.43 0.58 0.03 80.49
15:25:01 4 2.30 0.00 0.32 0.12 0.02 97.24
15:25:01 5 12.50 0.00 1.12 11.92 0.05 74.41
15:25:01 6 3.02 0.00 0.18 0.17 0.02 96.61
15:25:01 7 2.61 0.00 0.34 3.27 0.02 93.76
15:26:01 all 11.36 0.00 2.73 2.25 0.05 83.60
15:26:01 0 6.33 0.00 2.27 3.75 0.03 87.61
15:26:01 1 4.91 0.00 3.20 1.02 0.05 90.81
15:26:01 2 6.08 0.00 2.20 0.23 0.03 91.45
15:26:01 3 4.49 0.00 2.22 0.25 0.03 93.01
15:26:01 4 30.20 0.00 3.30 3.22 0.08 63.20
15:26:01 5 14.57 0.00 2.42 8.70 0.03 74.27
15:26:01 6 13.95 0.00 3.13 0.15 0.05 82.71
15:26:01 7 10.43 0.00 3.14 0.69 0.07 85.67
15:27:01 all 10.86 0.00 5.19 6.04 0.08 77.83
15:27:01 0 11.64 0.00 5.78 2.40 0.10 80.08
15:27:01 1 12.47 0.00 6.55 9.26 0.07 71.65
15:27:01 2 10.90 0.00 4.67 12.96 0.07 71.40
15:27:01 3 11.13 0.00 4.13 0.42 0.07 84.24
15:27:01 4 8.72 0.00 5.25 13.78 0.05 72.20
15:27:01 5 11.25 0.00 5.25 4.49 0.08 78.93
15:27:01 6 10.08 0.00 4.76 4.89 0.07 80.21
15:27:01 7 10.70 0.00 5.12 0.20 0.09 83.90
15:28:01 all 43.21 0.00 5.88 6.70 0.10 44.12
15:28:01 0 44.43 0.00 6.56 3.34 0.10 45.56
15:28:01 1 41.03 0.00 5.82 13.60 0.10 39.46
15:28:01 2 38.82 0.00 5.33 1.98 0.08 53.79
15:28:01 3 41.46 0.00 5.76 1.47 0.10 51.20
15:28:01 4 46.38 0.00 5.41 6.96 0.10 41.16
15:28:01 5 43.36 0.00 6.34 22.95 0.10 27.24
15:28:01 6 44.10 0.00 6.03 1.25 0.14 48.48
15:28:01 7 46.05 0.00 5.80 2.09 0.08 45.98
15:29:01 all 28.72 0.00 3.10 0.68 0.09 67.40
15:29:01 0 30.39 0.00 3.50 0.00 0.08 66.02
15:29:01 1 24.75 0.00 2.53 5.05 0.08 67.59
15:29:01 2 31.10 0.00 3.27 0.05 0.10 65.48
15:29:01 3 26.05 0.00 2.80 0.05 0.08 71.02
15:29:01 4 28.23 0.00 3.26 0.22 0.10 68.19
15:29:01 5 24.97 0.00 2.61 0.07 0.10 72.24
15:29:01 6 34.12 0.00 3.44 0.00 0.12 62.32
15:29:01 7 30.18 0.00 3.39 0.00 0.08 66.34
15:30:01 all 5.41 0.00 0.83 1.00 0.04 92.72
15:30:01 0 1.62 0.00 0.67 0.18 0.03 97.50
15:30:01 1 7.56 0.00 1.17 5.62 0.03 85.62
15:30:01 2 1.31 0.00 0.67 1.56 0.03 96.42
15:30:01 3 1.09 0.00 0.65 0.13 0.05 98.08
15:30:01 4 3.53 0.00 0.79 0.03 0.05 95.60
15:30:01 5 24.52 0.00 1.07 0.32 0.05 74.04
15:30:01 6 2.42 0.00 0.80 0.05 0.02 96.71
15:30:01 7 1.27 0.00 0.82 0.08 0.03 97.80
Average: all 18.26 0.00 3.07 3.17 0.06 75.43
Average: 0 16.10 0.00 3.16 1.64 0.06 79.03
Average: 1 18.45 0.00 3.40 6.12 0.06 71.96
Average: 2 18.03 0.00 2.88 2.95 0.06 76.07
Average: 3 16.90 0.00 2.83 0.48 0.06 79.72
Average: 4 19.91 0.00 3.05 4.04 0.07 72.94
Average: 5 21.87 0.00 3.13 8.04 0.07 66.89
Average: 6 17.93 0.00 3.05 1.08 0.07 77.88
Average: 7 16.86 0.00 3.09 1.05 0.06 78.94
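[Editor's note: the three reports above are sysstat's sar output sampled once per minute across the run (disk I/O, memory, and per-interface network rates, then per-CPU utilization); they show the CSIT load peaking in the 15:28 interval, consistent with the container startup and test phase. A minimal sketch of producing such a capture, assuming sysstat is installed and the output file path is arbitrary:

# sample every 60 s, 7 times, recording binary data to a file in the background
sar -o /tmp/sa_run 60 7 >/dev/null 2>&1 &

# after the run, render the same sections seen above from the recorded file
sar -b -r -n DEV -f /tmp/sa_run   # I/O, memory, and network per interface
sar -P ALL -f /tmp/sa_run         # utilization broken out per CPU
]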